00:00:00.000 Started by upstream project "autotest-nightly" build number 4126 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3488 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.131 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.132 The recommended git tool is: git 00:00:00.132 using credential 00000000-0000-0000-0000-000000000002 00:00:00.137 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.180 Fetching changes from the remote Git repository 00:00:00.182 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.220 Using shallow fetch with depth 1 00:00:00.220 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.220 > git --version # timeout=10 00:00:00.260 > git --version # 'git version 2.39.2' 00:00:00.260 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.283 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.283 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.775 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.786 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.817 Checking out Revision 7510e71a2b3ec6fca98e4ec196065590f900d444 (FETCH_HEAD) 00:00:07.817 > git config core.sparsecheckout # timeout=10 00:00:07.866 > git read-tree -mu HEAD # timeout=10 00:00:07.881 > git checkout -f 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=5 00:00:07.899 Commit message: "kid: add issue 3541" 00:00:07.899 > git rev-list --no-walk 7510e71a2b3ec6fca98e4ec196065590f900d444 # timeout=10 00:00:07.986 [Pipeline] Start of Pipeline 00:00:07.998 [Pipeline] library 00:00:07.999 Loading library shm_lib@master 00:00:08.000 Library shm_lib@master is cached. Copying from home. 00:00:08.018 [Pipeline] node 00:00:08.045 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:08.047 [Pipeline] { 00:00:08.056 [Pipeline] catchError 00:00:08.057 [Pipeline] { 00:00:08.068 [Pipeline] wrap 00:00:08.076 [Pipeline] { 00:00:08.083 [Pipeline] stage 00:00:08.085 [Pipeline] { (Prologue) 00:00:08.101 [Pipeline] echo 00:00:08.103 Node: VM-host-SM9 00:00:08.109 [Pipeline] cleanWs 00:00:08.118 [WS-CLEANUP] Deleting project workspace... 00:00:08.118 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.124 [WS-CLEANUP] done 00:00:08.300 [Pipeline] setCustomBuildProperty 00:00:08.366 [Pipeline] httpRequest 00:00:09.021 [Pipeline] echo 00:00:09.023 Sorcerer 10.211.164.101 is alive 00:00:09.031 [Pipeline] retry 00:00:09.033 [Pipeline] { 00:00:09.045 [Pipeline] httpRequest 00:00:09.051 HttpMethod: GET 00:00:09.051 URL: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:09.052 Sending request to url: http://10.211.164.101/packages/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:09.066 Response Code: HTTP/1.1 200 OK 00:00:09.067 Success: Status code 200 is in the accepted range: 200,404 00:00:09.068 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:14.410 [Pipeline] } 00:00:14.428 [Pipeline] // retry 00:00:14.435 [Pipeline] sh 00:00:14.719 + tar --no-same-owner -xf jbp_7510e71a2b3ec6fca98e4ec196065590f900d444.tar.gz 00:00:14.736 [Pipeline] httpRequest 00:00:15.486 [Pipeline] echo 00:00:15.488 Sorcerer 10.211.164.101 is alive 00:00:15.498 [Pipeline] retry 00:00:15.500 [Pipeline] { 00:00:15.517 [Pipeline] httpRequest 00:00:15.522 HttpMethod: GET 00:00:15.522 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:15.523 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:15.544 Response Code: HTTP/1.1 200 OK 00:00:15.545 Success: Status code 200 is in the accepted range: 200,404 00:00:15.545 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:29.670 [Pipeline] } 00:01:29.692 [Pipeline] // retry 00:01:29.699 [Pipeline] sh 00:01:29.977 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:32.524 [Pipeline] sh 00:01:32.804 + git -C spdk log --oneline -n5 00:01:32.804 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:01:32.804 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:01:32.804 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:01:32.804 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:01:32.804 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:01:32.824 [Pipeline] writeFile 00:01:32.840 [Pipeline] sh 00:01:33.124 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:33.136 [Pipeline] sh 00:01:33.415 + cat autorun-spdk.conf 00:01:33.415 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.415 SPDK_TEST_NVMF=1 00:01:33.415 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.415 SPDK_TEST_URING=1 00:01:33.415 SPDK_TEST_VFIOUSER=1 00:01:33.415 SPDK_TEST_USDT=1 00:01:33.415 SPDK_RUN_ASAN=1 00:01:33.415 SPDK_RUN_UBSAN=1 00:01:33.415 NET_TYPE=virt 00:01:33.415 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:33.423 RUN_NIGHTLY=1 00:01:33.425 [Pipeline] } 00:01:33.439 [Pipeline] // stage 00:01:33.454 [Pipeline] stage 00:01:33.456 [Pipeline] { (Run VM) 00:01:33.468 [Pipeline] sh 00:01:33.750 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:33.750 + echo 'Start stage prepare_nvme.sh' 00:01:33.750 Start stage prepare_nvme.sh 00:01:33.750 + [[ -n 3 ]] 00:01:33.750 + disk_prefix=ex3 00:01:33.750 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:33.750 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:33.750 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:33.750 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.750 ++ 
SPDK_TEST_NVMF=1 00:01:33.750 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:33.750 ++ SPDK_TEST_URING=1 00:01:33.750 ++ SPDK_TEST_VFIOUSER=1 00:01:33.750 ++ SPDK_TEST_USDT=1 00:01:33.750 ++ SPDK_RUN_ASAN=1 00:01:33.750 ++ SPDK_RUN_UBSAN=1 00:01:33.750 ++ NET_TYPE=virt 00:01:33.750 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:33.750 ++ RUN_NIGHTLY=1 00:01:33.750 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:33.750 + nvme_files=() 00:01:33.750 + declare -A nvme_files 00:01:33.750 + backend_dir=/var/lib/libvirt/images/backends 00:01:33.750 + nvme_files['nvme.img']=5G 00:01:33.750 + nvme_files['nvme-cmb.img']=5G 00:01:33.750 + nvme_files['nvme-multi0.img']=4G 00:01:33.750 + nvme_files['nvme-multi1.img']=4G 00:01:33.750 + nvme_files['nvme-multi2.img']=4G 00:01:33.750 + nvme_files['nvme-openstack.img']=8G 00:01:33.750 + nvme_files['nvme-zns.img']=5G 00:01:33.750 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:33.750 + (( SPDK_TEST_FTL == 1 )) 00:01:33.750 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:33.750 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:33.750 + for nvme in "${!nvme_files[@]}" 00:01:33.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:01:33.750 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:33.750 + for nvme in "${!nvme_files[@]}" 00:01:33.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:01:33.750 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:33.750 + for nvme in "${!nvme_files[@]}" 00:01:33.750 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:01:34.009 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:34.009 + for nvme in "${!nvme_files[@]}" 00:01:34.009 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:01:34.009 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:34.009 + for nvme in "${!nvme_files[@]}" 00:01:34.009 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:01:34.009 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:34.009 + for nvme in "${!nvme_files[@]}" 00:01:34.009 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:01:34.267 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:34.267 + for nvme in "${!nvme_files[@]}" 00:01:34.267 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:01:34.267 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:34.267 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:01:34.527 + echo 'End stage prepare_nvme.sh' 00:01:34.527 End stage prepare_nvme.sh 00:01:34.539 [Pipeline] sh 00:01:34.820 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:34.820 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 
--nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39
00:01:34.820
00:01:34.820 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant
00:01:34.820 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk
00:01:34.820 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:34.820 HELP=0
00:01:34.820 DRY_RUN=0
00:01:34.820 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,
00:01:34.820 NVME_DISKS_TYPE=nvme,nvme,
00:01:34.820 NVME_AUTO_CREATE=0
00:01:34.820 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,
00:01:34.820 NVME_CMB=,,
00:01:34.820 NVME_PMR=,,
00:01:34.820 NVME_ZNS=,,
00:01:34.820 NVME_MS=,,
00:01:34.820 NVME_FDP=,,
00:01:34.820 SPDK_VAGRANT_DISTRO=fedora39
00:01:34.820 SPDK_VAGRANT_VMCPU=10
00:01:34.820 SPDK_VAGRANT_VMRAM=12288
00:01:34.820 SPDK_VAGRANT_PROVIDER=libvirt
00:01:34.820 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:34.820 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:34.820 SPDK_OPENSTACK_NETWORK=0
00:01:34.820 VAGRANT_PACKAGE_BOX=0
00:01:34.820 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:34.820 FORCE_DISTRO=true
00:01:34.820 VAGRANT_BOX_VERSION=
00:01:34.820 EXTRA_VAGRANTFILES=
00:01:34.820 NIC_MODEL=e1000
00:01:34.820
00:01:34.820 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt'
00:01:34.820 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:38.110 Bringing machine 'default' up with 'libvirt' provider...
00:01:38.110 ==> default: Creating image (snapshot of base box volume).
00:01:38.110 ==> default: Creating domain with the following settings...
00:01:38.110 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727512755_9bcaac4ea6178f5c49da
00:01:38.110 ==> default: -- Domain type: kvm
00:01:38.110 ==> default: -- Cpus: 10
00:01:38.110 ==> default: -- Feature: acpi
00:01:38.110 ==> default: -- Feature: apic
00:01:38.110 ==> default: -- Feature: pae
00:01:38.110 ==> default: -- Memory: 12288M
00:01:38.110 ==> default: -- Memory Backing: hugepages:
00:01:38.110 ==> default: -- Management MAC:
00:01:38.110 ==> default: -- Loader:
00:01:38.111 ==> default: -- Nvram:
00:01:38.111 ==> default: -- Base box: spdk/fedora39
00:01:38.111 ==> default: -- Storage pool: default
00:01:38.111 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727512755_9bcaac4ea6178f5c49da.img (20G)
00:01:38.111 ==> default: -- Volume Cache: default
00:01:38.111 ==> default: -- Kernel:
00:01:38.111 ==> default: -- Initrd:
00:01:38.111 ==> default: -- Graphics Type: vnc
00:01:38.111 ==> default: -- Graphics Port: -1
00:01:38.111 ==> default: -- Graphics IP: 127.0.0.1
00:01:38.111 ==> default: -- Graphics Password: Not defined
00:01:38.111 ==> default: -- Video Type: cirrus
00:01:38.111 ==> default: -- Video VRAM: 9216
00:01:38.111 ==> default: -- Sound Type:
00:01:38.111 ==> default: -- Keymap: en-us
00:01:38.111 ==> default: -- TPM Path:
00:01:38.111 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:38.111 ==> default: -- Command line args:
00:01:38.111 ==> default: -> value=-device,
00:01:38.111 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:38.111 ==> default: -> value=-drive,
00:01:38.111 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0,
00:01:38.111 ==> default: -> value=-device,
00:01:38.111 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:38.111 ==> default: -> value=-device,
00:01:38.111 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:38.111 ==> default: -> value=-drive,
00:01:38.111 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:38.111 ==> default: -> value=-device,
00:01:38.111 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:38.111 ==> default: -> value=-drive,
00:01:38.111 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:38.111 ==> default: -> value=-device,
00:01:38.111 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:38.111 ==> default: -> value=-drive,
00:01:38.111 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:38.111 ==> default: -> value=-device,
00:01:38.111 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:38.369 ==> default: Creating shared folders metadata...
00:01:38.369 ==> default: Starting domain.
00:01:39.746 ==> default: Waiting for domain to get an IP address...
00:01:57.827 ==> default: Waiting for SSH to become available...
00:01:57.827 ==> default: Configuring and enabling network interfaces...
00:02:00.362 default: SSH address: 192.168.121.76:22
00:02:00.362 default: SSH username: vagrant
00:02:00.362 default: SSH auth method: private key
00:02:02.270 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:10.382 ==> default: Mounting SSHFS shared folder...
00:02:11.317 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:11.317 ==> default: Checking Mount..
00:02:12.702 ==> default: Folder Successfully Mounted!
00:02:12.702 ==> default: Running provisioner: file...
00:02:13.638 default: ~/.gitconfig => .gitconfig
00:02:13.897
00:02:13.897 SUCCESS!
00:02:13.897
00:02:13.897 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:13.897 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:13.897 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:13.897
00:02:13.905 [Pipeline] }
00:02:13.921 [Pipeline] // stage
00:02:13.929 [Pipeline] dir
00:02:13.930 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt
00:02:13.932 [Pipeline] {
00:02:13.947 [Pipeline] catchError
00:02:13.949 [Pipeline] {
00:02:13.961 [Pipeline] sh
00:02:14.240 + vagrant ssh-config --host vagrant
00:02:14.240 + sed -ne /^Host/,$p
00:02:14.240 + tee ssh_conf
00:02:17.528 Host vagrant
00:02:17.528 HostName 192.168.121.76
00:02:17.528 User vagrant
00:02:17.528 Port 22
00:02:17.528 UserKnownHostsFile /dev/null
00:02:17.528 StrictHostKeyChecking no
00:02:17.528 PasswordAuthentication no
00:02:17.528 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:17.528 IdentitiesOnly yes
00:02:17.528 LogLevel FATAL
00:02:17.528 ForwardAgent yes
00:02:17.528 ForwardX11 yes
00:02:17.528
00:02:17.542 [Pipeline] withEnv
00:02:17.545 [Pipeline] {
00:02:17.559 [Pipeline] sh
00:02:17.839 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:17.839 source /etc/os-release
00:02:17.839 [[ -e /image.version ]] && img=$(< /image.version)
00:02:17.839 # Minimal, systemd-like check.
00:02:17.839 if [[ -e /.dockerenv ]]; then
00:02:17.839 # Clear garbage from the node's name:
00:02:17.839 # agt-er_autotest_547-896 -> autotest_547-896
00:02:17.839 # $HOSTNAME is the actual container id
00:02:17.839 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:17.839 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:17.839 # We can assume this is a mount from a host where container is running,
00:02:17.839 # so fetch its hostname to easily identify the target swarm worker.
00:02:17.839 container="$(< /etc/hostname) ($agent)" 00:02:17.839 else 00:02:17.839 # Fallback 00:02:17.839 container=$agent 00:02:17.839 fi 00:02:17.839 fi 00:02:17.839 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:17.839 00:02:18.116 [Pipeline] } 00:02:18.137 [Pipeline] // withEnv 00:02:18.142 [Pipeline] setCustomBuildProperty 00:02:18.150 [Pipeline] stage 00:02:18.152 [Pipeline] { (Tests) 00:02:18.161 [Pipeline] sh 00:02:18.433 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:18.706 [Pipeline] sh 00:02:18.986 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:19.259 [Pipeline] timeout 00:02:19.259 Timeout set to expire in 1 hr 0 min 00:02:19.261 [Pipeline] { 00:02:19.277 [Pipeline] sh 00:02:19.556 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:20.125 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:02:20.138 [Pipeline] sh 00:02:20.418 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:20.690 [Pipeline] sh 00:02:20.970 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:21.246 [Pipeline] sh 00:02:21.526 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:21.785 ++ readlink -f spdk_repo 00:02:21.785 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:21.785 + [[ -n /home/vagrant/spdk_repo ]] 00:02:21.785 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:21.785 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:21.785 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:21.785 + [[ ! 
-d /home/vagrant/spdk_repo/output ]]
00:02:21.785 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:21.785 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]]
00:02:21.785 + cd /home/vagrant/spdk_repo
00:02:21.785 + source /etc/os-release
00:02:21.785 ++ NAME='Fedora Linux'
00:02:21.785 ++ VERSION='39 (Cloud Edition)'
00:02:21.785 ++ ID=fedora
00:02:21.785 ++ VERSION_ID=39
00:02:21.785 ++ VERSION_CODENAME=
00:02:21.785 ++ PLATFORM_ID=platform:f39
00:02:21.785 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:21.785 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:21.785 ++ LOGO=fedora-logo-icon
00:02:21.785 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:21.785 ++ HOME_URL=https://fedoraproject.org/
00:02:21.785 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:21.785 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:21.785 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:21.785 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:21.785 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:21.785 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:21.785 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:21.785 ++ SUPPORT_END=2024-11-12
00:02:21.785 ++ VARIANT='Cloud Edition'
00:02:21.785 ++ VARIANT_ID=cloud
00:02:21.785 + uname -a
00:02:21.785 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:21.785 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:22.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:22.044 Hugepages
00:02:22.044 node hugesize free / total
00:02:22.304 node0 1048576kB 0 / 0
00:02:22.304 node0 2048kB 0 / 0
00:02:22.304
00:02:22.304 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:22.304 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:22.304 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:22.304 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:22.304 + rm -f /tmp/spdk-ld-path
00:02:22.304 + source autorun-spdk.conf
00:02:22.304 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:22.304 ++ SPDK_TEST_NVMF=1
00:02:22.304 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:22.304 ++ SPDK_TEST_URING=1
00:02:22.304 ++ SPDK_TEST_VFIOUSER=1
00:02:22.304 ++ SPDK_TEST_USDT=1
00:02:22.304 ++ SPDK_RUN_ASAN=1
00:02:22.304 ++ SPDK_RUN_UBSAN=1
00:02:22.304 ++ NET_TYPE=virt
00:02:22.304 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:22.304 ++ RUN_NIGHTLY=1
00:02:22.304 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:22.304 + [[ -n '' ]]
00:02:22.304 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:22.304 + for M in /var/spdk/build-*-manifest.txt
00:02:22.304 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:22.304 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:22.304 + for M in /var/spdk/build-*-manifest.txt
00:02:22.304 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:22.304 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:22.304 + for M in /var/spdk/build-*-manifest.txt
00:02:22.304 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:22.304 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:22.304 ++ uname
00:02:22.304 + [[ Linux == \L\i\n\u\x ]]
00:02:22.304 + sudo dmesg -T
00:02:22.304 + sudo dmesg --clear
00:02:22.304 + dmesg_pid=5254
00:02:22.304 + sudo dmesg -Tw
00:02:22.304 + [[ Fedora Linux == FreeBSD ]]
00:02:22.304 + export
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.304 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.304 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:22.304 + [[ -x /usr/src/fio-static/fio ]] 00:02:22.304 + export FIO_BIN=/usr/src/fio-static/fio 00:02:22.304 + FIO_BIN=/usr/src/fio-static/fio 00:02:22.304 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:22.304 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:22.304 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:22.304 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:22.304 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:22.304 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:22.304 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:22.304 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:22.304 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:22.304 Test configuration: 00:02:22.304 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.304 SPDK_TEST_NVMF=1 00:02:22.304 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:22.304 SPDK_TEST_URING=1 00:02:22.304 SPDK_TEST_VFIOUSER=1 00:02:22.304 SPDK_TEST_USDT=1 00:02:22.304 SPDK_RUN_ASAN=1 00:02:22.304 SPDK_RUN_UBSAN=1 00:02:22.304 NET_TYPE=virt 00:02:22.304 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.563 RUN_NIGHTLY=1 08:40:00 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:22.563 08:40:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:22.563 08:40:00 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:22.563 08:40:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:22.563 08:40:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:22.563 08:40:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:22.563 08:40:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.563 08:40:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.564 08:40:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.564 08:40:00 -- paths/export.sh@5 -- $ export PATH 00:02:22.564 08:40:00 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:22.564 08:40:00 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:22.564 08:40:00 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:22.564 08:40:00 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727512800.XXXXXX 00:02:22.564 08:40:00 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727512800.avJkwP 00:02:22.564 08:40:00 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:22.564 08:40:00 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:02:22.564 08:40:00 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:22.564 08:40:00 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:22.564 08:40:00 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:22.564 08:40:00 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:22.564 08:40:00 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:22.564 08:40:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:22.564 08:40:00 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:02:22.564 08:40:00 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:22.564 08:40:00 -- pm/common@17 -- $ local monitor 00:02:22.564 08:40:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.564 08:40:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:22.564 08:40:00 -- pm/common@25 -- $ sleep 1 00:02:22.564 08:40:00 -- pm/common@21 -- $ date +%s 00:02:22.564 08:40:00 -- pm/common@21 -- $ date +%s 00:02:22.564 08:40:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727512800 00:02:22.564 08:40:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727512800 00:02:22.564 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727512800_collect-cpu-load.pm.log 00:02:22.564 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727512800_collect-vmstat.pm.log 00:02:23.500 08:40:01 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:23.500 08:40:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:23.500 08:40:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:23.500 08:40:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:23.500 08:40:01 -- spdk/autobuild.sh@16 -- $ date -u 00:02:23.500 Sat Sep 28 08:40:01 AM UTC 2024 00:02:23.500 08:40:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:23.500 
v25.01-pre-17-g09cc66129 00:02:23.500 08:40:01 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:23.500 08:40:01 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:23.500 08:40:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:23.500 08:40:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:23.500 08:40:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.500 ************************************ 00:02:23.500 START TEST asan 00:02:23.500 ************************************ 00:02:23.500 using asan 00:02:23.500 08:40:01 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:23.500 00:02:23.500 real 0m0.000s 00:02:23.500 user 0m0.000s 00:02:23.500 sys 0m0.000s 00:02:23.500 08:40:01 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:23.500 ************************************ 00:02:23.500 END TEST asan 00:02:23.500 08:40:01 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:23.500 ************************************ 00:02:23.500 08:40:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:23.500 08:40:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:23.500 08:40:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:23.500 08:40:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:23.500 08:40:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.500 ************************************ 00:02:23.500 START TEST ubsan 00:02:23.500 ************************************ 00:02:23.500 using ubsan 00:02:23.500 08:40:01 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:23.500 00:02:23.500 real 0m0.000s 00:02:23.500 user 0m0.000s 00:02:23.500 sys 0m0.000s 00:02:23.500 08:40:01 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:23.500 08:40:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:23.501 ************************************ 00:02:23.501 END TEST ubsan 00:02:23.501 ************************************ 00:02:23.501 08:40:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:23.501 08:40:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:23.501 08:40:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:23.501 08:40:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:23.501 08:40:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:23.501 08:40:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:23.501 08:40:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:23.501 08:40:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:23.501 08:40:01 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:02:23.759 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:23.759 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:24.327 Using 'verbs' RDMA provider 00:02:37.470 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:52.351 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:52.351 Creating mk/config.mk...done. 00:02:52.351 Creating mk/cc.flags.mk...done. 00:02:52.351 Type 'make' to build. 
00:02:52.351 08:40:28 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:52.351 08:40:28 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:52.351 08:40:28 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:52.351 08:40:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.351 ************************************ 00:02:52.351 START TEST make 00:02:52.351 ************************************ 00:02:52.351 08:40:28 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:52.351 make[1]: Nothing to be done for 'all'. 00:02:52.351 The Meson build system 00:02:52.351 Version: 1.5.0 00:02:52.351 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:52.351 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:52.351 Build type: native build 00:02:52.351 Project name: libvfio-user 00:02:52.351 Project version: 0.0.1 00:02:52.351 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:52.351 C linker for the host machine: cc ld.bfd 2.40-14 00:02:52.351 Host machine cpu family: x86_64 00:02:52.351 Host machine cpu: x86_64 00:02:52.351 Run-time dependency threads found: YES 00:02:52.351 Library dl found: YES 00:02:52.351 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:52.351 Run-time dependency json-c found: YES 0.17 00:02:52.351 Run-time dependency cmocka found: YES 1.1.7 00:02:52.351 Program pytest-3 found: NO 00:02:52.351 Program flake8 found: NO 00:02:52.351 Program misspell-fixer found: NO 00:02:52.351 Program restructuredtext-lint found: NO 00:02:52.351 Program valgrind found: YES (/usr/bin/valgrind) 00:02:52.351 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:52.351 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:52.351 Compiler for C supports arguments -Wwrite-strings: YES 00:02:52.351 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:52.351 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:52.351 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:52.351 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:52.351 Build targets in project: 8 00:02:52.351 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:52.351 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:52.351 00:02:52.351 libvfio-user 0.0.1 00:02:52.351 00:02:52.351 User defined options 00:02:52.351 buildtype : debug 00:02:52.351 default_library: shared 00:02:52.351 libdir : /usr/local/lib 00:02:52.351 00:02:52.351 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:52.915 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:53.172 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:53.172 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:53.172 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:53.172 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:53.172 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:53.172 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:53.172 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:53.172 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:53.172 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:53.172 [10/37] Compiling C object samples/null.p/null.c.o 00:02:53.172 [11/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:53.172 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:53.172 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:53.172 [14/37] Compiling C object samples/server.p/server.c.o 00:02:53.172 [15/37] Compiling C object samples/client.p/client.c.o 00:02:53.172 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:53.172 [17/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:53.172 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:53.430 [19/37] Linking target lib/libvfio-user.so.0.0.1 00:02:53.430 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:53.430 [21/37] Linking target samples/client 00:02:53.430 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:53.430 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:53.430 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:53.430 [25/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:53.430 [26/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:53.430 [27/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:53.430 [28/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:53.430 [29/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:53.430 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:53.430 [31/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:53.430 [32/37] Linking target test/unit_tests 00:02:53.430 [33/37] Linking target samples/server 00:02:53.690 [34/37] Linking target samples/null 00:02:53.690 [35/37] Linking target samples/shadow_ioeventfd_server 00:02:53.690 [36/37] Linking target samples/lspci 00:02:53.690 [37/37] Linking target samples/gpio-pci-idio-16 00:02:53.690 INFO: autodetecting backend as ninja 00:02:53.690 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:53.690 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:54.262 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:54.262 ninja: no work to do. 00:03:04.227 The Meson build system 00:03:04.227 Version: 1.5.0 00:03:04.227 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:04.227 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:04.227 Build type: native build 00:03:04.227 Program cat found: YES (/usr/bin/cat) 00:03:04.227 Project name: DPDK 00:03:04.227 Project version: 24.03.0 00:03:04.227 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:04.227 C linker for the host machine: cc ld.bfd 2.40-14 00:03:04.227 Host machine cpu family: x86_64 00:03:04.227 Host machine cpu: x86_64 00:03:04.227 Message: ## Building in Developer Mode ## 00:03:04.227 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:04.227 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:04.227 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:04.227 Program python3 found: YES (/usr/bin/python3) 00:03:04.227 Program cat found: YES (/usr/bin/cat) 00:03:04.227 Compiler for C supports arguments -march=native: YES 00:03:04.227 Checking for size of "void *" : 8 00:03:04.227 Checking for size of "void *" : 8 (cached) 00:03:04.227 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:04.227 Library m found: YES 00:03:04.227 Library numa found: YES 00:03:04.227 Has header "numaif.h" : YES 00:03:04.227 Library fdt found: NO 00:03:04.227 Library execinfo found: NO 00:03:04.227 Has header "execinfo.h" : YES 00:03:04.227 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:04.227 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:04.227 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:04.227 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:04.227 Run-time dependency openssl found: YES 3.1.1 00:03:04.227 Run-time dependency libpcap found: YES 1.10.4 00:03:04.227 Has header "pcap.h" with dependency libpcap: YES 00:03:04.227 Compiler for C supports arguments -Wcast-qual: YES 00:03:04.227 Compiler for C supports arguments -Wdeprecated: YES 00:03:04.227 Compiler for C supports arguments -Wformat: YES 00:03:04.227 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:04.227 Compiler for C supports arguments -Wformat-security: NO 00:03:04.227 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:04.227 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:04.227 Compiler for C supports arguments -Wnested-externs: YES 00:03:04.227 Compiler for C supports arguments -Wold-style-definition: YES 00:03:04.227 Compiler for C supports arguments -Wpointer-arith: YES 00:03:04.227 Compiler for C supports arguments -Wsign-compare: YES 00:03:04.227 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:04.227 Compiler for C supports arguments -Wundef: YES 00:03:04.227 Compiler for C supports arguments -Wwrite-strings: YES 00:03:04.227 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:04.227 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:04.227 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:04.227 Compiler for C supports arguments 
-Wno-zero-length-bounds: YES 00:03:04.227 Program objdump found: YES (/usr/bin/objdump) 00:03:04.227 Compiler for C supports arguments -mavx512f: YES 00:03:04.227 Checking if "AVX512 checking" compiles: YES 00:03:04.227 Fetching value of define "__SSE4_2__" : 1 00:03:04.227 Fetching value of define "__AES__" : 1 00:03:04.227 Fetching value of define "__AVX__" : 1 00:03:04.227 Fetching value of define "__AVX2__" : 1 00:03:04.227 Fetching value of define "__AVX512BW__" : (undefined) 00:03:04.227 Fetching value of define "__AVX512CD__" : (undefined) 00:03:04.227 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:04.227 Fetching value of define "__AVX512F__" : (undefined) 00:03:04.227 Fetching value of define "__AVX512VL__" : (undefined) 00:03:04.227 Fetching value of define "__PCLMUL__" : 1 00:03:04.227 Fetching value of define "__RDRND__" : 1 00:03:04.227 Fetching value of define "__RDSEED__" : 1 00:03:04.227 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:04.227 Fetching value of define "__znver1__" : (undefined) 00:03:04.227 Fetching value of define "__znver2__" : (undefined) 00:03:04.227 Fetching value of define "__znver3__" : (undefined) 00:03:04.227 Fetching value of define "__znver4__" : (undefined) 00:03:04.227 Library asan found: YES 00:03:04.227 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:04.227 Message: lib/log: Defining dependency "log" 00:03:04.227 Message: lib/kvargs: Defining dependency "kvargs" 00:03:04.227 Message: lib/telemetry: Defining dependency "telemetry" 00:03:04.227 Library rt found: YES 00:03:04.227 Checking for function "getentropy" : NO 00:03:04.227 Message: lib/eal: Defining dependency "eal" 00:03:04.227 Message: lib/ring: Defining dependency "ring" 00:03:04.227 Message: lib/rcu: Defining dependency "rcu" 00:03:04.227 Message: lib/mempool: Defining dependency "mempool" 00:03:04.227 Message: lib/mbuf: Defining dependency "mbuf" 00:03:04.227 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:04.227 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:04.227 Compiler for C supports arguments -mpclmul: YES 00:03:04.227 Compiler for C supports arguments -maes: YES 00:03:04.227 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:04.227 Compiler for C supports arguments -mavx512bw: YES 00:03:04.227 Compiler for C supports arguments -mavx512dq: YES 00:03:04.227 Compiler for C supports arguments -mavx512vl: YES 00:03:04.227 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:04.227 Compiler for C supports arguments -mavx2: YES 00:03:04.227 Compiler for C supports arguments -mavx: YES 00:03:04.227 Message: lib/net: Defining dependency "net" 00:03:04.227 Message: lib/meter: Defining dependency "meter" 00:03:04.227 Message: lib/ethdev: Defining dependency "ethdev" 00:03:04.227 Message: lib/pci: Defining dependency "pci" 00:03:04.227 Message: lib/cmdline: Defining dependency "cmdline" 00:03:04.227 Message: lib/hash: Defining dependency "hash" 00:03:04.227 Message: lib/timer: Defining dependency "timer" 00:03:04.227 Message: lib/compressdev: Defining dependency "compressdev" 00:03:04.227 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:04.228 Message: lib/dmadev: Defining dependency "dmadev" 00:03:04.228 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:04.228 Message: lib/power: Defining dependency "power" 00:03:04.228 Message: lib/reorder: Defining dependency "reorder" 00:03:04.228 Message: lib/security: Defining dependency "security" 00:03:04.228 Has header 
"linux/userfaultfd.h" : YES 00:03:04.228 Has header "linux/vduse.h" : YES 00:03:04.228 Message: lib/vhost: Defining dependency "vhost" 00:03:04.228 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:04.228 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:04.228 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:04.228 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:04.228 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:04.228 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:04.228 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:04.228 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:04.228 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:04.228 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:04.228 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:04.228 Configuring doxy-api-html.conf using configuration 00:03:04.228 Configuring doxy-api-man.conf using configuration 00:03:04.228 Program mandb found: YES (/usr/bin/mandb) 00:03:04.228 Program sphinx-build found: NO 00:03:04.228 Configuring rte_build_config.h using configuration 00:03:04.228 Message: 00:03:04.228 ================= 00:03:04.228 Applications Enabled 00:03:04.228 ================= 00:03:04.228 00:03:04.228 apps: 00:03:04.228 00:03:04.228 00:03:04.228 Message: 00:03:04.228 ================= 00:03:04.228 Libraries Enabled 00:03:04.228 ================= 00:03:04.228 00:03:04.228 libs: 00:03:04.228 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:04.228 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:04.228 cryptodev, dmadev, power, reorder, security, vhost, 00:03:04.228 00:03:04.228 Message: 00:03:04.228 =============== 00:03:04.228 Drivers Enabled 00:03:04.228 =============== 00:03:04.228 00:03:04.228 common: 00:03:04.228 00:03:04.228 bus: 00:03:04.228 pci, vdev, 00:03:04.228 mempool: 00:03:04.228 ring, 00:03:04.228 dma: 00:03:04.228 00:03:04.228 net: 00:03:04.228 00:03:04.228 crypto: 00:03:04.228 00:03:04.228 compress: 00:03:04.228 00:03:04.228 vdpa: 00:03:04.228 00:03:04.228 00:03:04.228 Message: 00:03:04.228 ================= 00:03:04.228 Content Skipped 00:03:04.228 ================= 00:03:04.228 00:03:04.228 apps: 00:03:04.228 dumpcap: explicitly disabled via build config 00:03:04.228 graph: explicitly disabled via build config 00:03:04.228 pdump: explicitly disabled via build config 00:03:04.228 proc-info: explicitly disabled via build config 00:03:04.228 test-acl: explicitly disabled via build config 00:03:04.228 test-bbdev: explicitly disabled via build config 00:03:04.228 test-cmdline: explicitly disabled via build config 00:03:04.228 test-compress-perf: explicitly disabled via build config 00:03:04.228 test-crypto-perf: explicitly disabled via build config 00:03:04.228 test-dma-perf: explicitly disabled via build config 00:03:04.228 test-eventdev: explicitly disabled via build config 00:03:04.228 test-fib: explicitly disabled via build config 00:03:04.228 test-flow-perf: explicitly disabled via build config 00:03:04.228 test-gpudev: explicitly disabled via build config 00:03:04.228 test-mldev: explicitly disabled via build config 00:03:04.228 test-pipeline: explicitly disabled via build config 00:03:04.228 test-pmd: explicitly disabled via build config 00:03:04.228 test-regex: explicitly disabled via build config 00:03:04.228 
test-sad: explicitly disabled via build config 00:03:04.228 test-security-perf: explicitly disabled via build config 00:03:04.228 00:03:04.228 libs: 00:03:04.228 argparse: explicitly disabled via build config 00:03:04.228 metrics: explicitly disabled via build config 00:03:04.228 acl: explicitly disabled via build config 00:03:04.228 bbdev: explicitly disabled via build config 00:03:04.228 bitratestats: explicitly disabled via build config 00:03:04.228 bpf: explicitly disabled via build config 00:03:04.228 cfgfile: explicitly disabled via build config 00:03:04.228 distributor: explicitly disabled via build config 00:03:04.228 efd: explicitly disabled via build config 00:03:04.228 eventdev: explicitly disabled via build config 00:03:04.228 dispatcher: explicitly disabled via build config 00:03:04.228 gpudev: explicitly disabled via build config 00:03:04.228 gro: explicitly disabled via build config 00:03:04.228 gso: explicitly disabled via build config 00:03:04.228 ip_frag: explicitly disabled via build config 00:03:04.228 jobstats: explicitly disabled via build config 00:03:04.228 latencystats: explicitly disabled via build config 00:03:04.228 lpm: explicitly disabled via build config 00:03:04.228 member: explicitly disabled via build config 00:03:04.228 pcapng: explicitly disabled via build config 00:03:04.228 rawdev: explicitly disabled via build config 00:03:04.228 regexdev: explicitly disabled via build config 00:03:04.228 mldev: explicitly disabled via build config 00:03:04.228 rib: explicitly disabled via build config 00:03:04.228 sched: explicitly disabled via build config 00:03:04.228 stack: explicitly disabled via build config 00:03:04.228 ipsec: explicitly disabled via build config 00:03:04.228 pdcp: explicitly disabled via build config 00:03:04.228 fib: explicitly disabled via build config 00:03:04.228 port: explicitly disabled via build config 00:03:04.228 pdump: explicitly disabled via build config 00:03:04.228 table: explicitly disabled via build config 00:03:04.228 pipeline: explicitly disabled via build config 00:03:04.228 graph: explicitly disabled via build config 00:03:04.228 node: explicitly disabled via build config 00:03:04.228 00:03:04.228 drivers: 00:03:04.228 common/cpt: not in enabled drivers build config 00:03:04.228 common/dpaax: not in enabled drivers build config 00:03:04.228 common/iavf: not in enabled drivers build config 00:03:04.228 common/idpf: not in enabled drivers build config 00:03:04.228 common/ionic: not in enabled drivers build config 00:03:04.228 common/mvep: not in enabled drivers build config 00:03:04.228 common/octeontx: not in enabled drivers build config 00:03:04.228 bus/auxiliary: not in enabled drivers build config 00:03:04.228 bus/cdx: not in enabled drivers build config 00:03:04.228 bus/dpaa: not in enabled drivers build config 00:03:04.228 bus/fslmc: not in enabled drivers build config 00:03:04.228 bus/ifpga: not in enabled drivers build config 00:03:04.228 bus/platform: not in enabled drivers build config 00:03:04.228 bus/uacce: not in enabled drivers build config 00:03:04.228 bus/vmbus: not in enabled drivers build config 00:03:04.228 common/cnxk: not in enabled drivers build config 00:03:04.228 common/mlx5: not in enabled drivers build config 00:03:04.228 common/nfp: not in enabled drivers build config 00:03:04.228 common/nitrox: not in enabled drivers build config 00:03:04.228 common/qat: not in enabled drivers build config 00:03:04.228 common/sfc_efx: not in enabled drivers build config 00:03:04.228 mempool/bucket: not in enabled 
drivers build config 00:03:04.228 mempool/cnxk: not in enabled drivers build config 00:03:04.228 mempool/dpaa: not in enabled drivers build config 00:03:04.228 mempool/dpaa2: not in enabled drivers build config 00:03:04.228 mempool/octeontx: not in enabled drivers build config 00:03:04.228 mempool/stack: not in enabled drivers build config 00:03:04.228 dma/cnxk: not in enabled drivers build config 00:03:04.228 dma/dpaa: not in enabled drivers build config 00:03:04.228 dma/dpaa2: not in enabled drivers build config 00:03:04.228 dma/hisilicon: not in enabled drivers build config 00:03:04.228 dma/idxd: not in enabled drivers build config 00:03:04.228 dma/ioat: not in enabled drivers build config 00:03:04.228 dma/skeleton: not in enabled drivers build config 00:03:04.228 net/af_packet: not in enabled drivers build config 00:03:04.228 net/af_xdp: not in enabled drivers build config 00:03:04.228 net/ark: not in enabled drivers build config 00:03:04.228 net/atlantic: not in enabled drivers build config 00:03:04.228 net/avp: not in enabled drivers build config 00:03:04.228 net/axgbe: not in enabled drivers build config 00:03:04.228 net/bnx2x: not in enabled drivers build config 00:03:04.228 net/bnxt: not in enabled drivers build config 00:03:04.228 net/bonding: not in enabled drivers build config 00:03:04.228 net/cnxk: not in enabled drivers build config 00:03:04.228 net/cpfl: not in enabled drivers build config 00:03:04.228 net/cxgbe: not in enabled drivers build config 00:03:04.228 net/dpaa: not in enabled drivers build config 00:03:04.228 net/dpaa2: not in enabled drivers build config 00:03:04.228 net/e1000: not in enabled drivers build config 00:03:04.228 net/ena: not in enabled drivers build config 00:03:04.228 net/enetc: not in enabled drivers build config 00:03:04.228 net/enetfec: not in enabled drivers build config 00:03:04.228 net/enic: not in enabled drivers build config 00:03:04.228 net/failsafe: not in enabled drivers build config 00:03:04.228 net/fm10k: not in enabled drivers build config 00:03:04.228 net/gve: not in enabled drivers build config 00:03:04.228 net/hinic: not in enabled drivers build config 00:03:04.228 net/hns3: not in enabled drivers build config 00:03:04.228 net/i40e: not in enabled drivers build config 00:03:04.228 net/iavf: not in enabled drivers build config 00:03:04.228 net/ice: not in enabled drivers build config 00:03:04.228 net/idpf: not in enabled drivers build config 00:03:04.228 net/igc: not in enabled drivers build config 00:03:04.228 net/ionic: not in enabled drivers build config 00:03:04.228 net/ipn3ke: not in enabled drivers build config 00:03:04.228 net/ixgbe: not in enabled drivers build config 00:03:04.228 net/mana: not in enabled drivers build config 00:03:04.228 net/memif: not in enabled drivers build config 00:03:04.228 net/mlx4: not in enabled drivers build config 00:03:04.228 net/mlx5: not in enabled drivers build config 00:03:04.228 net/mvneta: not in enabled drivers build config 00:03:04.228 net/mvpp2: not in enabled drivers build config 00:03:04.228 net/netvsc: not in enabled drivers build config 00:03:04.228 net/nfb: not in enabled drivers build config 00:03:04.228 net/nfp: not in enabled drivers build config 00:03:04.228 net/ngbe: not in enabled drivers build config 00:03:04.228 net/null: not in enabled drivers build config 00:03:04.228 net/octeontx: not in enabled drivers build config 00:03:04.228 net/octeon_ep: not in enabled drivers build config 00:03:04.228 net/pcap: not in enabled drivers build config 00:03:04.229 net/pfe: not in 
enabled drivers build config 00:03:04.229 net/qede: not in enabled drivers build config 00:03:04.229 net/ring: not in enabled drivers build config 00:03:04.229 net/sfc: not in enabled drivers build config 00:03:04.229 net/softnic: not in enabled drivers build config 00:03:04.229 net/tap: not in enabled drivers build config 00:03:04.229 net/thunderx: not in enabled drivers build config 00:03:04.229 net/txgbe: not in enabled drivers build config 00:03:04.229 net/vdev_netvsc: not in enabled drivers build config 00:03:04.229 net/vhost: not in enabled drivers build config 00:03:04.229 net/virtio: not in enabled drivers build config 00:03:04.229 net/vmxnet3: not in enabled drivers build config 00:03:04.229 raw/*: missing internal dependency, "rawdev" 00:03:04.229 crypto/armv8: not in enabled drivers build config 00:03:04.229 crypto/bcmfs: not in enabled drivers build config 00:03:04.229 crypto/caam_jr: not in enabled drivers build config 00:03:04.229 crypto/ccp: not in enabled drivers build config 00:03:04.229 crypto/cnxk: not in enabled drivers build config 00:03:04.229 crypto/dpaa_sec: not in enabled drivers build config 00:03:04.229 crypto/dpaa2_sec: not in enabled drivers build config 00:03:04.229 crypto/ipsec_mb: not in enabled drivers build config 00:03:04.229 crypto/mlx5: not in enabled drivers build config 00:03:04.229 crypto/mvsam: not in enabled drivers build config 00:03:04.229 crypto/nitrox: not in enabled drivers build config 00:03:04.229 crypto/null: not in enabled drivers build config 00:03:04.229 crypto/octeontx: not in enabled drivers build config 00:03:04.229 crypto/openssl: not in enabled drivers build config 00:03:04.229 crypto/scheduler: not in enabled drivers build config 00:03:04.229 crypto/uadk: not in enabled drivers build config 00:03:04.229 crypto/virtio: not in enabled drivers build config 00:03:04.229 compress/isal: not in enabled drivers build config 00:03:04.229 compress/mlx5: not in enabled drivers build config 00:03:04.229 compress/nitrox: not in enabled drivers build config 00:03:04.229 compress/octeontx: not in enabled drivers build config 00:03:04.229 compress/zlib: not in enabled drivers build config 00:03:04.229 regex/*: missing internal dependency, "regexdev" 00:03:04.229 ml/*: missing internal dependency, "mldev" 00:03:04.229 vdpa/ifc: not in enabled drivers build config 00:03:04.229 vdpa/mlx5: not in enabled drivers build config 00:03:04.229 vdpa/nfp: not in enabled drivers build config 00:03:04.229 vdpa/sfc: not in enabled drivers build config 00:03:04.229 event/*: missing internal dependency, "eventdev" 00:03:04.229 baseband/*: missing internal dependency, "bbdev" 00:03:04.229 gpu/*: missing internal dependency, "gpudev" 00:03:04.229 00:03:04.229 00:03:04.488 Build targets in project: 85 00:03:04.488 00:03:04.488 DPDK 24.03.0 00:03:04.488 00:03:04.488 User defined options 00:03:04.488 buildtype : debug 00:03:04.488 default_library : shared 00:03:04.488 libdir : lib 00:03:04.488 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:04.488 b_sanitize : address 00:03:04.488 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:04.488 c_link_args : 00:03:04.488 cpu_instruction_set: native 00:03:04.488 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:04.488 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:04.488 enable_docs : false 00:03:04.488 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:04.488 enable_kmods : false 00:03:04.488 max_lcores : 128 00:03:04.488 tests : false 00:03:04.488 00:03:04.488 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:05.055 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:05.055 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:05.055 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:05.055 [3/268] Linking static target lib/librte_kvargs.a 00:03:05.055 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:05.055 [5/268] Linking static target lib/librte_log.a 00:03:05.055 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:05.679 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.679 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:05.679 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:05.938 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:05.938 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:05.938 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:05.938 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:05.938 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:05.938 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.938 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:06.196 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:06.196 [18/268] Linking target lib/librte_log.so.24.1 00:03:06.196 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:06.196 [20/268] Linking static target lib/librte_telemetry.a 00:03:06.454 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:06.454 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:06.454 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:06.712 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:06.970 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:06.970 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:06.970 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:06.970 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:06.970 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:06.971 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:07.229 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.229 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:07.229 [33/268] Linking target lib/librte_telemetry.so.24.1 00:03:07.229 [34/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:07.229 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:07.487 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:07.487 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:07.746 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:07.746 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:07.746 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:07.746 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:08.005 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:08.005 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:08.005 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:08.005 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:08.263 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:08.263 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:08.263 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:08.521 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:08.521 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:08.779 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:08.779 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:09.037 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:09.037 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:09.037 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:09.037 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:09.037 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:09.296 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:09.296 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:09.296 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:09.554 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:09.812 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:09.812 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:09.812 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:09.812 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:09.812 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:10.070 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:10.070 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:10.329 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:10.329 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:10.329 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:10.329 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:10.587 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 
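Editor's note (sketch, not part of the captured console output): the "User defined options" summary printed above is what meson echoes back after configuration; the invocation below reconstructs it only for illustration. Option names and values are taken from that summary, while the exact command line, working directory, and elided app/lib lists are assumptions — the real invocation is driven by SPDK's dpdkbuild makefile.
# Assumed shape of the configure/build step, run from the DPDK source tree (illustrative only)
meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    --libdir=lib \
    --buildtype=debug \
    --default-library=shared \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps=dumpcap,graph,pdump,...      # full list as printed above \
    -Ddisable_libs=acl,argparse,bbdev,...       # full list as printed above \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10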
00:03:10.587 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:10.587 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:10.845 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:10.845 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:10.845 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:10.845 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:10.845 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:10.845 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:11.103 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:11.103 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:11.361 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:11.361 [85/268] Linking static target lib/librte_ring.a 00:03:11.361 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:11.361 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:11.619 [88/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:11.619 [89/268] Linking static target lib/librte_eal.a 00:03:11.619 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:11.619 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:11.619 [92/268] Linking static target lib/librte_rcu.a 00:03:11.619 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:11.619 [94/268] Linking static target lib/librte_mempool.a 00:03:11.619 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:11.878 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.136 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:12.136 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:12.136 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.136 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:12.136 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:12.394 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:12.394 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:12.652 [104/268] Linking static target lib/librte_mbuf.a 00:03:12.652 [105/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:12.652 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:12.652 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:12.910 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:12.910 [109/268] Linking static target lib/librte_meter.a 00:03:12.910 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.168 [111/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:13.168 [112/268] Linking static target lib/librte_net.a 00:03:13.168 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:13.168 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.426 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:13.426 [116/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:13.426 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:13.690 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.690 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.960 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:13.960 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:14.218 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:14.785 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:14.785 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:14.785 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:14.785 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:14.785 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:14.785 [128/268] Linking static target lib/librte_pci.a 00:03:15.044 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:15.044 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:15.044 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:15.044 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:15.044 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:15.044 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:15.303 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:15.303 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:15.303 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:15.303 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.303 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:15.303 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:15.303 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:15.303 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:15.562 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:15.562 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:15.562 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:15.562 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:15.562 [147/268] Linking static target lib/librte_cmdline.a 00:03:16.129 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:16.129 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:16.129 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:16.129 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:16.129 [152/268] Linking static target lib/librte_ethdev.a 00:03:16.387 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:16.646 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:16.646 [155/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:16.646 [156/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:16.646 [157/268] Linking static target lib/librte_timer.a 00:03:16.646 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:16.904 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:16.904 [160/268] Linking static target lib/librte_compressdev.a 00:03:16.904 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:16.904 [162/268] Linking static target lib/librte_hash.a 00:03:17.162 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:17.162 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:17.420 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:17.420 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.420 [167/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.420 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:17.420 [169/268] Linking static target lib/librte_dmadev.a 00:03:17.420 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:17.420 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:17.990 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:17.990 [173/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.990 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:17.990 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:18.250 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.250 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:18.508 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.508 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:18.508 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:18.509 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:18.767 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:18.767 [183/268] Linking static target lib/librte_cryptodev.a 00:03:18.767 [184/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:19.025 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:19.025 [186/268] Linking static target lib/librte_power.a 00:03:19.283 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:19.283 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:19.283 [189/268] Linking static target lib/librte_reorder.a 00:03:19.283 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:19.542 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:19.542 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:19.542 [193/268] Linking static target lib/librte_security.a 00:03:19.800 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.058 [195/268] Generating lib/power.sym_chk with a custom command 
(wrapped by meson to capture output) 00:03:20.316 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:20.316 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.316 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:20.574 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:20.832 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:20.832 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:21.090 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:21.090 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:21.090 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:21.348 [205/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.348 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:21.606 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:21.606 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:21.606 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:21.865 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:21.865 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:22.122 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:22.122 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:22.122 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:22.122 [215/268] Linking static target drivers/librte_bus_vdev.a 00:03:22.122 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:22.122 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:22.122 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:22.122 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:22.123 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:22.123 [221/268] Linking static target drivers/librte_bus_pci.a 00:03:22.392 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:22.392 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:22.392 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:22.392 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:22.392 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.677 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.243 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:23.501 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.501 [230/268] Linking target lib/librte_eal.so.24.1 00:03:23.758 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:23.758 [232/268] Linking target lib/librte_ring.so.24.1 00:03:23.758 [233/268] 
Linking target lib/librte_pci.so.24.1 00:03:23.758 [234/268] Linking target lib/librte_dmadev.so.24.1 00:03:23.758 [235/268] Linking target lib/librte_meter.so.24.1 00:03:23.758 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:23.758 [237/268] Linking target lib/librte_timer.so.24.1 00:03:23.758 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:23.758 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:23.758 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:24.016 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:24.016 [242/268] Linking target lib/librte_mempool.so.24.1 00:03:24.016 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:24.016 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:24.016 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:24.016 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:24.016 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:24.016 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:24.016 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:24.274 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:24.274 [251/268] Linking target lib/librte_compressdev.so.24.1 00:03:24.274 [252/268] Linking target lib/librte_net.so.24.1 00:03:24.274 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:03:24.274 [254/268] Linking target lib/librte_reorder.so.24.1 00:03:24.532 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:24.532 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:24.532 [257/268] Linking target lib/librte_hash.so.24.1 00:03:24.532 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:24.532 [259/268] Linking target lib/librte_security.so.24.1 00:03:24.532 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.532 [261/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:24.791 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:24.791 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:24.791 [264/268] Linking target lib/librte_power.so.24.1 00:03:28.080 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:28.080 [266/268] Linking static target lib/librte_vhost.a 00:03:29.455 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.455 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:29.455 INFO: autodetecting backend as ninja 00:03:29.455 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:51.393 CC lib/ut/ut.o 00:03:51.393 CC lib/ut_mock/mock.o 00:03:51.393 CC lib/log/log.o 00:03:51.393 CC lib/log/log_flags.o 00:03:51.393 CC lib/log/log_deprecated.o 00:03:51.393 LIB libspdk_ut.a 00:03:51.393 LIB libspdk_ut_mock.a 00:03:51.393 SO libspdk_ut.so.2.0 00:03:51.393 SO libspdk_ut_mock.so.6.0 00:03:51.393 LIB libspdk_log.a 00:03:51.393 SO libspdk_log.so.7.0 00:03:51.393 SYMLINK libspdk_ut.so 00:03:51.393 SYMLINK libspdk_ut_mock.so 00:03:51.393 SYMLINK libspdk_log.so 00:03:51.393 CXX 
lib/trace_parser/trace.o 00:03:51.393 CC lib/ioat/ioat.o 00:03:51.393 CC lib/dma/dma.o 00:03:51.393 CC lib/util/base64.o 00:03:51.393 CC lib/util/bit_array.o 00:03:51.393 CC lib/util/cpuset.o 00:03:51.393 CC lib/util/crc32c.o 00:03:51.393 CC lib/util/crc16.o 00:03:51.393 CC lib/util/crc32.o 00:03:51.393 CC lib/vfio_user/host/vfio_user_pci.o 00:03:51.393 CC lib/util/crc32_ieee.o 00:03:51.393 CC lib/util/crc64.o 00:03:51.393 CC lib/util/dif.o 00:03:51.393 CC lib/util/fd.o 00:03:51.393 CC lib/util/fd_group.o 00:03:51.393 LIB libspdk_dma.a 00:03:51.393 SO libspdk_dma.so.5.0 00:03:51.393 CC lib/util/file.o 00:03:51.393 CC lib/util/hexlify.o 00:03:51.393 CC lib/vfio_user/host/vfio_user.o 00:03:51.393 SYMLINK libspdk_dma.so 00:03:51.393 CC lib/util/iov.o 00:03:51.393 LIB libspdk_ioat.a 00:03:51.393 CC lib/util/math.o 00:03:51.393 SO libspdk_ioat.so.7.0 00:03:51.393 CC lib/util/net.o 00:03:51.393 CC lib/util/pipe.o 00:03:51.393 CC lib/util/strerror_tls.o 00:03:51.393 SYMLINK libspdk_ioat.so 00:03:51.393 CC lib/util/string.o 00:03:51.393 CC lib/util/uuid.o 00:03:51.393 CC lib/util/xor.o 00:03:51.393 CC lib/util/zipf.o 00:03:51.393 LIB libspdk_vfio_user.a 00:03:51.393 CC lib/util/md5.o 00:03:51.393 SO libspdk_vfio_user.so.5.0 00:03:51.393 SYMLINK libspdk_vfio_user.so 00:03:51.393 LIB libspdk_util.a 00:03:51.393 SO libspdk_util.so.10.0 00:03:51.393 LIB libspdk_trace_parser.a 00:03:51.393 SYMLINK libspdk_util.so 00:03:51.393 SO libspdk_trace_parser.so.6.0 00:03:51.393 SYMLINK libspdk_trace_parser.so 00:03:51.393 CC lib/vmd/vmd.o 00:03:51.393 CC lib/env_dpdk/env.o 00:03:51.393 CC lib/vmd/led.o 00:03:51.393 CC lib/env_dpdk/memory.o 00:03:51.393 CC lib/json/json_parse.o 00:03:51.393 CC lib/json/json_util.o 00:03:51.393 CC lib/idxd/idxd.o 00:03:51.393 CC lib/rdma_provider/common.o 00:03:51.393 CC lib/rdma_utils/rdma_utils.o 00:03:51.393 CC lib/conf/conf.o 00:03:51.393 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:51.393 CC lib/env_dpdk/pci.o 00:03:51.393 CC lib/json/json_write.o 00:03:51.393 CC lib/idxd/idxd_user.o 00:03:51.393 LIB libspdk_conf.a 00:03:51.393 SO libspdk_conf.so.6.0 00:03:51.393 LIB libspdk_rdma_utils.a 00:03:51.393 SO libspdk_rdma_utils.so.1.0 00:03:51.393 SYMLINK libspdk_conf.so 00:03:51.393 LIB libspdk_rdma_provider.a 00:03:51.393 CC lib/env_dpdk/init.o 00:03:51.393 SYMLINK libspdk_rdma_utils.so 00:03:51.393 CC lib/env_dpdk/threads.o 00:03:51.393 SO libspdk_rdma_provider.so.6.0 00:03:51.393 SYMLINK libspdk_rdma_provider.so 00:03:51.393 CC lib/env_dpdk/pci_ioat.o 00:03:51.393 CC lib/idxd/idxd_kernel.o 00:03:51.393 CC lib/env_dpdk/pci_virtio.o 00:03:51.393 LIB libspdk_json.a 00:03:51.393 CC lib/env_dpdk/pci_vmd.o 00:03:51.393 CC lib/env_dpdk/pci_idxd.o 00:03:51.393 SO libspdk_json.so.6.0 00:03:51.393 SYMLINK libspdk_json.so 00:03:51.393 CC lib/env_dpdk/pci_event.o 00:03:51.393 CC lib/env_dpdk/sigbus_handler.o 00:03:51.393 CC lib/env_dpdk/pci_dpdk.o 00:03:51.393 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:51.393 LIB libspdk_idxd.a 00:03:51.393 SO libspdk_idxd.so.12.1 00:03:51.393 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:51.393 LIB libspdk_vmd.a 00:03:51.393 SO libspdk_vmd.so.6.0 00:03:51.393 SYMLINK libspdk_idxd.so 00:03:51.393 CC lib/jsonrpc/jsonrpc_server.o 00:03:51.393 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:51.393 CC lib/jsonrpc/jsonrpc_client.o 00:03:51.393 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:51.393 SYMLINK libspdk_vmd.so 00:03:51.393 LIB libspdk_jsonrpc.a 00:03:51.393 SO libspdk_jsonrpc.so.6.0 00:03:51.393 SYMLINK libspdk_jsonrpc.so 00:03:51.393 CC lib/rpc/rpc.o 00:03:51.652 LIB 
libspdk_env_dpdk.a 00:03:51.652 LIB libspdk_rpc.a 00:03:51.652 SO libspdk_env_dpdk.so.15.0 00:03:51.652 SO libspdk_rpc.so.6.0 00:03:51.652 SYMLINK libspdk_rpc.so 00:03:51.910 SYMLINK libspdk_env_dpdk.so 00:03:51.910 CC lib/trace/trace.o 00:03:51.910 CC lib/trace/trace_flags.o 00:03:51.910 CC lib/trace/trace_rpc.o 00:03:51.910 CC lib/notify/notify.o 00:03:51.910 CC lib/notify/notify_rpc.o 00:03:51.910 CC lib/keyring/keyring.o 00:03:51.910 CC lib/keyring/keyring_rpc.o 00:03:52.169 LIB libspdk_notify.a 00:03:52.169 SO libspdk_notify.so.6.0 00:03:52.169 LIB libspdk_keyring.a 00:03:52.169 SYMLINK libspdk_notify.so 00:03:52.169 SO libspdk_keyring.so.2.0 00:03:52.427 LIB libspdk_trace.a 00:03:52.427 SYMLINK libspdk_keyring.so 00:03:52.427 SO libspdk_trace.so.11.0 00:03:52.427 SYMLINK libspdk_trace.so 00:03:52.686 CC lib/sock/sock_rpc.o 00:03:52.686 CC lib/sock/sock.o 00:03:52.686 CC lib/thread/iobuf.o 00:03:52.686 CC lib/thread/thread.o 00:03:53.254 LIB libspdk_sock.a 00:03:53.254 SO libspdk_sock.so.10.0 00:03:53.254 SYMLINK libspdk_sock.so 00:03:53.819 CC lib/nvme/nvme_ctrlr.o 00:03:53.819 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:53.819 CC lib/nvme/nvme_fabric.o 00:03:53.819 CC lib/nvme/nvme_ns_cmd.o 00:03:53.819 CC lib/nvme/nvme_ns.o 00:03:53.819 CC lib/nvme/nvme_pcie.o 00:03:53.819 CC lib/nvme/nvme_pcie_common.o 00:03:53.819 CC lib/nvme/nvme_qpair.o 00:03:53.819 CC lib/nvme/nvme.o 00:03:54.386 CC lib/nvme/nvme_quirks.o 00:03:54.644 CC lib/nvme/nvme_transport.o 00:03:54.644 CC lib/nvme/nvme_discovery.o 00:03:54.644 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:54.644 LIB libspdk_thread.a 00:03:54.644 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:54.644 SO libspdk_thread.so.10.1 00:03:54.902 CC lib/nvme/nvme_tcp.o 00:03:54.902 SYMLINK libspdk_thread.so 00:03:54.902 CC lib/nvme/nvme_opal.o 00:03:54.902 CC lib/nvme/nvme_io_msg.o 00:03:54.902 CC lib/nvme/nvme_poll_group.o 00:03:55.160 CC lib/nvme/nvme_zns.o 00:03:55.418 CC lib/nvme/nvme_stubs.o 00:03:55.418 CC lib/blob/blobstore.o 00:03:55.418 CC lib/accel/accel.o 00:03:55.419 CC lib/nvme/nvme_auth.o 00:03:55.419 CC lib/blob/request.o 00:03:55.677 CC lib/init/json_config.o 00:03:55.677 CC lib/init/subsystem.o 00:03:55.935 CC lib/nvme/nvme_cuse.o 00:03:55.935 CC lib/nvme/nvme_vfio_user.o 00:03:55.935 CC lib/nvme/nvme_rdma.o 00:03:55.935 CC lib/init/subsystem_rpc.o 00:03:55.935 CC lib/init/rpc.o 00:03:56.194 CC lib/accel/accel_rpc.o 00:03:56.194 LIB libspdk_init.a 00:03:56.194 SO libspdk_init.so.6.0 00:03:56.194 SYMLINK libspdk_init.so 00:03:56.194 CC lib/accel/accel_sw.o 00:03:56.452 CC lib/virtio/virtio.o 00:03:56.452 CC lib/blob/zeroes.o 00:03:56.710 CC lib/virtio/virtio_vhost_user.o 00:03:56.710 CC lib/vfu_tgt/tgt_endpoint.o 00:03:56.710 CC lib/vfu_tgt/tgt_rpc.o 00:03:56.710 CC lib/virtio/virtio_vfio_user.o 00:03:56.710 CC lib/blob/blob_bs_dev.o 00:03:56.710 LIB libspdk_accel.a 00:03:56.969 CC lib/virtio/virtio_pci.o 00:03:56.969 SO libspdk_accel.so.16.0 00:03:56.969 SYMLINK libspdk_accel.so 00:03:56.969 LIB libspdk_vfu_tgt.a 00:03:56.969 CC lib/fsdev/fsdev.o 00:03:56.969 CC lib/fsdev/fsdev_rpc.o 00:03:56.969 CC lib/fsdev/fsdev_io.o 00:03:56.969 CC lib/event/app.o 00:03:56.969 CC lib/event/reactor.o 00:03:57.232 SO libspdk_vfu_tgt.so.3.0 00:03:57.232 CC lib/bdev/bdev.o 00:03:57.232 SYMLINK libspdk_vfu_tgt.so 00:03:57.232 CC lib/bdev/bdev_rpc.o 00:03:57.232 CC lib/event/log_rpc.o 00:03:57.232 LIB libspdk_virtio.a 00:03:57.232 SO libspdk_virtio.so.7.0 00:03:57.509 SYMLINK libspdk_virtio.so 00:03:57.509 CC lib/event/app_rpc.o 00:03:57.509 CC 
lib/event/scheduler_static.o 00:03:57.509 CC lib/bdev/bdev_zone.o 00:03:57.509 CC lib/bdev/part.o 00:03:57.509 CC lib/bdev/scsi_nvme.o 00:03:57.509 LIB libspdk_nvme.a 00:03:57.768 LIB libspdk_event.a 00:03:57.768 SO libspdk_event.so.14.0 00:03:57.768 SO libspdk_nvme.so.14.0 00:03:57.768 SYMLINK libspdk_event.so 00:03:57.768 LIB libspdk_fsdev.a 00:03:58.026 SO libspdk_fsdev.so.1.0 00:03:58.026 SYMLINK libspdk_fsdev.so 00:03:58.026 SYMLINK libspdk_nvme.so 00:03:58.283 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:59.219 LIB libspdk_fuse_dispatcher.a 00:03:59.219 SO libspdk_fuse_dispatcher.so.1.0 00:03:59.219 SYMLINK libspdk_fuse_dispatcher.so 00:03:59.478 LIB libspdk_blob.a 00:03:59.478 SO libspdk_blob.so.11.0 00:03:59.737 SYMLINK libspdk_blob.so 00:03:59.996 CC lib/blobfs/blobfs.o 00:03:59.996 CC lib/blobfs/tree.o 00:03:59.996 CC lib/lvol/lvol.o 00:04:00.562 LIB libspdk_bdev.a 00:04:00.821 SO libspdk_bdev.so.16.0 00:04:00.821 SYMLINK libspdk_bdev.so 00:04:01.079 LIB libspdk_blobfs.a 00:04:01.079 SO libspdk_blobfs.so.10.0 00:04:01.079 CC lib/nvmf/ctrlr.o 00:04:01.079 CC lib/nvmf/ctrlr_discovery.o 00:04:01.079 CC lib/nvmf/ctrlr_bdev.o 00:04:01.079 CC lib/nvmf/subsystem.o 00:04:01.079 CC lib/ftl/ftl_core.o 00:04:01.079 CC lib/nbd/nbd.o 00:04:01.079 CC lib/scsi/dev.o 00:04:01.079 CC lib/ublk/ublk.o 00:04:01.079 SYMLINK libspdk_blobfs.so 00:04:01.079 CC lib/ftl/ftl_init.o 00:04:01.338 LIB libspdk_lvol.a 00:04:01.338 SO libspdk_lvol.so.10.0 00:04:01.338 SYMLINK libspdk_lvol.so 00:04:01.338 CC lib/scsi/lun.o 00:04:01.338 CC lib/nbd/nbd_rpc.o 00:04:01.338 CC lib/scsi/port.o 00:04:01.597 CC lib/ftl/ftl_layout.o 00:04:01.597 CC lib/scsi/scsi.o 00:04:01.597 CC lib/ftl/ftl_debug.o 00:04:01.597 LIB libspdk_nbd.a 00:04:01.597 SO libspdk_nbd.so.7.0 00:04:01.597 CC lib/nvmf/nvmf.o 00:04:01.597 CC lib/nvmf/nvmf_rpc.o 00:04:01.855 SYMLINK libspdk_nbd.so 00:04:01.855 CC lib/nvmf/transport.o 00:04:01.855 CC lib/scsi/scsi_bdev.o 00:04:01.855 CC lib/nvmf/tcp.o 00:04:01.855 CC lib/ublk/ublk_rpc.o 00:04:02.113 CC lib/nvmf/stubs.o 00:04:02.114 CC lib/ftl/ftl_io.o 00:04:02.114 LIB libspdk_ublk.a 00:04:02.114 SO libspdk_ublk.so.3.0 00:04:02.372 SYMLINK libspdk_ublk.so 00:04:02.372 CC lib/ftl/ftl_sb.o 00:04:02.372 CC lib/scsi/scsi_pr.o 00:04:02.372 CC lib/nvmf/mdns_server.o 00:04:02.630 CC lib/ftl/ftl_l2p.o 00:04:02.630 CC lib/ftl/ftl_l2p_flat.o 00:04:02.630 CC lib/nvmf/vfio_user.o 00:04:02.630 CC lib/scsi/scsi_rpc.o 00:04:02.630 CC lib/scsi/task.o 00:04:02.630 CC lib/nvmf/rdma.o 00:04:02.630 CC lib/ftl/ftl_nv_cache.o 00:04:02.888 CC lib/nvmf/auth.o 00:04:02.888 CC lib/ftl/ftl_band.o 00:04:02.888 CC lib/ftl/ftl_band_ops.o 00:04:02.888 CC lib/ftl/ftl_writer.o 00:04:02.888 LIB libspdk_scsi.a 00:04:03.146 SO libspdk_scsi.so.9.0 00:04:03.146 SYMLINK libspdk_scsi.so 00:04:03.146 CC lib/ftl/ftl_rq.o 00:04:03.146 CC lib/ftl/ftl_reloc.o 00:04:03.146 CC lib/ftl/ftl_l2p_cache.o 00:04:03.405 CC lib/ftl/ftl_p2l.o 00:04:03.405 CC lib/ftl/ftl_p2l_log.o 00:04:03.405 CC lib/ftl/mngt/ftl_mngt.o 00:04:03.665 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:03.665 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:03.665 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:03.925 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:03.925 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:03.925 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:03.925 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:03.925 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:04.184 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:04.184 CC lib/iscsi/conn.o 00:04:04.184 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:04.184 CC lib/vhost/vhost.o 00:04:04.184 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:04:04.184 CC lib/vhost/vhost_rpc.o 00:04:04.184 CC lib/vhost/vhost_scsi.o 00:04:04.184 CC lib/vhost/vhost_blk.o 00:04:04.442 CC lib/vhost/rte_vhost_user.o 00:04:04.442 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:04.700 CC lib/iscsi/init_grp.o 00:04:04.700 CC lib/ftl/utils/ftl_conf.o 00:04:04.700 CC lib/ftl/utils/ftl_md.o 00:04:04.959 CC lib/iscsi/iscsi.o 00:04:04.959 CC lib/iscsi/param.o 00:04:04.959 CC lib/iscsi/portal_grp.o 00:04:04.959 CC lib/iscsi/tgt_node.o 00:04:04.959 CC lib/iscsi/iscsi_subsystem.o 00:04:05.217 CC lib/iscsi/iscsi_rpc.o 00:04:05.217 CC lib/ftl/utils/ftl_mempool.o 00:04:05.217 CC lib/ftl/utils/ftl_bitmap.o 00:04:05.475 CC lib/iscsi/task.o 00:04:05.475 CC lib/ftl/utils/ftl_property.o 00:04:05.475 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:05.475 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:05.475 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:05.475 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:05.475 LIB libspdk_nvmf.a 00:04:05.475 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:05.734 LIB libspdk_vhost.a 00:04:05.734 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:05.734 SO libspdk_nvmf.so.19.0 00:04:05.734 SO libspdk_vhost.so.8.0 00:04:05.734 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:05.734 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:05.734 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:05.734 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:05.734 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:05.734 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:05.992 SYMLINK libspdk_vhost.so 00:04:05.992 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:05.992 CC lib/ftl/base/ftl_base_dev.o 00:04:05.992 CC lib/ftl/base/ftl_base_bdev.o 00:04:05.992 CC lib/ftl/ftl_trace.o 00:04:05.992 SYMLINK libspdk_nvmf.so 00:04:06.250 LIB libspdk_ftl.a 00:04:06.508 SO libspdk_ftl.so.9.0 00:04:06.766 LIB libspdk_iscsi.a 00:04:06.766 SO libspdk_iscsi.so.8.0 00:04:06.766 SYMLINK libspdk_ftl.so 00:04:07.024 SYMLINK libspdk_iscsi.so 00:04:07.282 CC module/env_dpdk/env_dpdk_rpc.o 00:04:07.282 CC module/vfu_device/vfu_virtio.o 00:04:07.282 CC module/fsdev/aio/fsdev_aio.o 00:04:07.282 CC module/blob/bdev/blob_bdev.o 00:04:07.282 CC module/sock/posix/posix.o 00:04:07.282 CC module/keyring/file/keyring.o 00:04:07.282 CC module/accel/dsa/accel_dsa.o 00:04:07.282 CC module/accel/error/accel_error.o 00:04:07.282 CC module/accel/ioat/accel_ioat.o 00:04:07.541 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:07.541 LIB libspdk_env_dpdk_rpc.a 00:04:07.541 SO libspdk_env_dpdk_rpc.so.6.0 00:04:07.541 SYMLINK libspdk_env_dpdk_rpc.so 00:04:07.541 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:07.541 CC module/keyring/file/keyring_rpc.o 00:04:07.541 CC module/accel/error/accel_error_rpc.o 00:04:07.541 CC module/accel/ioat/accel_ioat_rpc.o 00:04:07.799 LIB libspdk_scheduler_dynamic.a 00:04:07.799 SO libspdk_scheduler_dynamic.so.4.0 00:04:07.799 LIB libspdk_keyring_file.a 00:04:07.799 LIB libspdk_blob_bdev.a 00:04:07.799 SO libspdk_keyring_file.so.2.0 00:04:07.799 SYMLINK libspdk_scheduler_dynamic.so 00:04:07.799 CC module/accel/dsa/accel_dsa_rpc.o 00:04:07.799 SO libspdk_blob_bdev.so.11.0 00:04:07.799 LIB libspdk_accel_ioat.a 00:04:07.799 LIB libspdk_accel_error.a 00:04:07.799 SO libspdk_accel_ioat.so.6.0 00:04:07.799 SO libspdk_accel_error.so.2.0 00:04:07.799 SYMLINK libspdk_keyring_file.so 00:04:07.799 SYMLINK libspdk_blob_bdev.so 00:04:07.799 CC module/fsdev/aio/linux_aio_mgr.o 00:04:07.799 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:07.799 SYMLINK libspdk_accel_error.so 00:04:07.799 SYMLINK libspdk_accel_ioat.so 
00:04:07.799 LIB libspdk_accel_dsa.a 00:04:08.055 CC module/keyring/linux/keyring.o 00:04:08.055 SO libspdk_accel_dsa.so.5.0 00:04:08.055 CC module/sock/uring/uring.o 00:04:08.055 SYMLINK libspdk_accel_dsa.so 00:04:08.055 CC module/vfu_device/vfu_virtio_blk.o 00:04:08.055 LIB libspdk_scheduler_dpdk_governor.a 00:04:08.055 CC module/accel/iaa/accel_iaa.o 00:04:08.055 CC module/vfu_device/vfu_virtio_scsi.o 00:04:08.055 CC module/scheduler/gscheduler/gscheduler.o 00:04:08.055 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:08.055 CC module/keyring/linux/keyring_rpc.o 00:04:08.313 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:08.313 LIB libspdk_fsdev_aio.a 00:04:08.313 SO libspdk_fsdev_aio.so.1.0 00:04:08.313 LIB libspdk_scheduler_gscheduler.a 00:04:08.313 LIB libspdk_keyring_linux.a 00:04:08.313 CC module/accel/iaa/accel_iaa_rpc.o 00:04:08.313 SO libspdk_scheduler_gscheduler.so.4.0 00:04:08.313 LIB libspdk_sock_posix.a 00:04:08.313 SO libspdk_keyring_linux.so.1.0 00:04:08.313 SYMLINK libspdk_fsdev_aio.so 00:04:08.313 SO libspdk_sock_posix.so.6.0 00:04:08.313 CC module/vfu_device/vfu_virtio_rpc.o 00:04:08.313 SYMLINK libspdk_scheduler_gscheduler.so 00:04:08.313 SYMLINK libspdk_keyring_linux.so 00:04:08.313 CC module/vfu_device/vfu_virtio_fs.o 00:04:08.570 SYMLINK libspdk_sock_posix.so 00:04:08.570 CC module/bdev/delay/vbdev_delay.o 00:04:08.570 LIB libspdk_accel_iaa.a 00:04:08.570 SO libspdk_accel_iaa.so.3.0 00:04:08.570 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:08.570 CC module/blobfs/bdev/blobfs_bdev.o 00:04:08.570 SYMLINK libspdk_accel_iaa.so 00:04:08.570 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:08.570 CC module/bdev/error/vbdev_error.o 00:04:08.570 CC module/bdev/gpt/gpt.o 00:04:08.570 CC module/bdev/gpt/vbdev_gpt.o 00:04:08.570 CC module/bdev/lvol/vbdev_lvol.o 00:04:08.828 LIB libspdk_vfu_device.a 00:04:08.828 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:08.828 CC module/bdev/error/vbdev_error_rpc.o 00:04:08.828 SO libspdk_vfu_device.so.3.0 00:04:08.828 LIB libspdk_blobfs_bdev.a 00:04:08.828 SO libspdk_blobfs_bdev.so.6.0 00:04:08.828 SYMLINK libspdk_vfu_device.so 00:04:08.828 SYMLINK libspdk_blobfs_bdev.so 00:04:08.828 LIB libspdk_bdev_delay.a 00:04:08.828 LIB libspdk_bdev_error.a 00:04:09.087 LIB libspdk_bdev_gpt.a 00:04:09.087 SO libspdk_bdev_error.so.6.0 00:04:09.087 SO libspdk_bdev_delay.so.6.0 00:04:09.087 LIB libspdk_sock_uring.a 00:04:09.087 SO libspdk_bdev_gpt.so.6.0 00:04:09.087 SO libspdk_sock_uring.so.5.0 00:04:09.087 CC module/bdev/malloc/bdev_malloc.o 00:04:09.087 CC module/bdev/null/bdev_null.o 00:04:09.087 SYMLINK libspdk_bdev_error.so 00:04:09.087 SYMLINK libspdk_bdev_delay.so 00:04:09.087 CC module/bdev/null/bdev_null_rpc.o 00:04:09.087 SYMLINK libspdk_bdev_gpt.so 00:04:09.087 SYMLINK libspdk_sock_uring.so 00:04:09.087 CC module/bdev/nvme/bdev_nvme.o 00:04:09.087 CC module/bdev/passthru/vbdev_passthru.o 00:04:09.345 CC module/bdev/split/vbdev_split.o 00:04:09.345 CC module/bdev/raid/bdev_raid.o 00:04:09.345 CC module/bdev/raid/bdev_raid_rpc.o 00:04:09.345 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:09.345 LIB libspdk_bdev_lvol.a 00:04:09.345 SO libspdk_bdev_lvol.so.6.0 00:04:09.345 CC module/bdev/uring/bdev_uring.o 00:04:09.345 LIB libspdk_bdev_null.a 00:04:09.345 SYMLINK libspdk_bdev_lvol.so 00:04:09.345 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:09.345 SO libspdk_bdev_null.so.6.0 00:04:09.345 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:09.603 SYMLINK libspdk_bdev_null.so 00:04:09.603 CC module/bdev/raid/bdev_raid_sb.o 00:04:09.603 CC 
module/bdev/malloc/bdev_malloc_rpc.o 00:04:09.603 CC module/bdev/split/vbdev_split_rpc.o 00:04:09.603 CC module/bdev/raid/raid0.o 00:04:09.603 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:09.603 LIB libspdk_bdev_passthru.a 00:04:09.603 LIB libspdk_bdev_malloc.a 00:04:09.603 LIB libspdk_bdev_zone_block.a 00:04:09.603 SO libspdk_bdev_passthru.so.6.0 00:04:09.603 LIB libspdk_bdev_split.a 00:04:09.862 SO libspdk_bdev_malloc.so.6.0 00:04:09.862 SO libspdk_bdev_zone_block.so.6.0 00:04:09.862 SO libspdk_bdev_split.so.6.0 00:04:09.862 SYMLINK libspdk_bdev_malloc.so 00:04:09.862 CC module/bdev/uring/bdev_uring_rpc.o 00:04:09.862 SYMLINK libspdk_bdev_passthru.so 00:04:09.862 SYMLINK libspdk_bdev_zone_block.so 00:04:09.862 SYMLINK libspdk_bdev_split.so 00:04:09.862 CC module/bdev/nvme/nvme_rpc.o 00:04:09.862 CC module/bdev/nvme/bdev_mdns_client.o 00:04:09.862 CC module/bdev/aio/bdev_aio.o 00:04:10.120 CC module/bdev/ftl/bdev_ftl.o 00:04:10.120 CC module/bdev/iscsi/bdev_iscsi.o 00:04:10.120 LIB libspdk_bdev_uring.a 00:04:10.120 CC module/bdev/nvme/vbdev_opal.o 00:04:10.120 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:10.120 SO libspdk_bdev_uring.so.6.0 00:04:10.120 SYMLINK libspdk_bdev_uring.so 00:04:10.120 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:10.120 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:10.379 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:10.379 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:10.379 CC module/bdev/aio/bdev_aio_rpc.o 00:04:10.379 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:10.379 CC module/bdev/raid/raid1.o 00:04:10.379 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:10.379 CC module/bdev/raid/concat.o 00:04:10.638 LIB libspdk_bdev_iscsi.a 00:04:10.638 SO libspdk_bdev_iscsi.so.6.0 00:04:10.638 LIB libspdk_bdev_aio.a 00:04:10.638 LIB libspdk_bdev_ftl.a 00:04:10.638 SYMLINK libspdk_bdev_iscsi.so 00:04:10.638 SO libspdk_bdev_aio.so.6.0 00:04:10.638 SO libspdk_bdev_ftl.so.6.0 00:04:10.638 SYMLINK libspdk_bdev_aio.so 00:04:10.638 SYMLINK libspdk_bdev_ftl.so 00:04:10.638 LIB libspdk_bdev_virtio.a 00:04:10.896 SO libspdk_bdev_virtio.so.6.0 00:04:10.896 LIB libspdk_bdev_raid.a 00:04:10.896 SYMLINK libspdk_bdev_virtio.so 00:04:10.896 SO libspdk_bdev_raid.so.6.0 00:04:10.896 SYMLINK libspdk_bdev_raid.so 00:04:11.833 LIB libspdk_bdev_nvme.a 00:04:12.091 SO libspdk_bdev_nvme.so.7.0 00:04:12.091 SYMLINK libspdk_bdev_nvme.so 00:04:12.658 CC module/event/subsystems/scheduler/scheduler.o 00:04:12.658 CC module/event/subsystems/keyring/keyring.o 00:04:12.658 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:12.658 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:12.658 CC module/event/subsystems/fsdev/fsdev.o 00:04:12.658 CC module/event/subsystems/iobuf/iobuf.o 00:04:12.658 CC module/event/subsystems/sock/sock.o 00:04:12.658 CC module/event/subsystems/vmd/vmd.o 00:04:12.658 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:12.658 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:12.658 LIB libspdk_event_keyring.a 00:04:12.658 SO libspdk_event_keyring.so.1.0 00:04:12.658 LIB libspdk_event_scheduler.a 00:04:12.658 LIB libspdk_event_vfu_tgt.a 00:04:12.658 LIB libspdk_event_vhost_blk.a 00:04:12.658 LIB libspdk_event_fsdev.a 00:04:12.658 LIB libspdk_event_sock.a 00:04:12.658 LIB libspdk_event_vmd.a 00:04:12.658 SO libspdk_event_scheduler.so.4.0 00:04:12.658 SO libspdk_event_vhost_blk.so.3.0 00:04:12.658 SO libspdk_event_fsdev.so.1.0 00:04:12.658 LIB libspdk_event_iobuf.a 00:04:12.658 SO libspdk_event_vfu_tgt.so.3.0 00:04:12.927 SO libspdk_event_sock.so.5.0 00:04:12.927 SYMLINK 
libspdk_event_keyring.so 00:04:12.927 SO libspdk_event_vmd.so.6.0 00:04:12.927 SO libspdk_event_iobuf.so.3.0 00:04:12.927 SYMLINK libspdk_event_scheduler.so 00:04:12.927 SYMLINK libspdk_event_fsdev.so 00:04:12.927 SYMLINK libspdk_event_vfu_tgt.so 00:04:12.927 SYMLINK libspdk_event_vhost_blk.so 00:04:12.927 SYMLINK libspdk_event_sock.so 00:04:12.927 SYMLINK libspdk_event_iobuf.so 00:04:12.927 SYMLINK libspdk_event_vmd.so 00:04:13.203 CC module/event/subsystems/accel/accel.o 00:04:13.203 LIB libspdk_event_accel.a 00:04:13.203 SO libspdk_event_accel.so.6.0 00:04:13.462 SYMLINK libspdk_event_accel.so 00:04:13.720 CC module/event/subsystems/bdev/bdev.o 00:04:13.979 LIB libspdk_event_bdev.a 00:04:13.979 SO libspdk_event_bdev.so.6.0 00:04:13.979 SYMLINK libspdk_event_bdev.so 00:04:14.237 CC module/event/subsystems/ublk/ublk.o 00:04:14.237 CC module/event/subsystems/scsi/scsi.o 00:04:14.237 CC module/event/subsystems/nbd/nbd.o 00:04:14.237 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:14.237 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:14.495 LIB libspdk_event_ublk.a 00:04:14.495 LIB libspdk_event_nbd.a 00:04:14.495 LIB libspdk_event_scsi.a 00:04:14.495 SO libspdk_event_ublk.so.3.0 00:04:14.495 SO libspdk_event_nbd.so.6.0 00:04:14.495 SO libspdk_event_scsi.so.6.0 00:04:14.495 SYMLINK libspdk_event_ublk.so 00:04:14.495 SYMLINK libspdk_event_nbd.so 00:04:14.495 SYMLINK libspdk_event_scsi.so 00:04:14.495 LIB libspdk_event_nvmf.a 00:04:14.495 SO libspdk_event_nvmf.so.6.0 00:04:14.753 SYMLINK libspdk_event_nvmf.so 00:04:14.753 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:14.753 CC module/event/subsystems/iscsi/iscsi.o 00:04:15.012 LIB libspdk_event_vhost_scsi.a 00:04:15.012 LIB libspdk_event_iscsi.a 00:04:15.012 SO libspdk_event_vhost_scsi.so.3.0 00:04:15.012 SO libspdk_event_iscsi.so.6.0 00:04:15.012 SYMLINK libspdk_event_vhost_scsi.so 00:04:15.012 SYMLINK libspdk_event_iscsi.so 00:04:15.270 SO libspdk.so.6.0 00:04:15.270 SYMLINK libspdk.so 00:04:15.529 CC app/trace_record/trace_record.o 00:04:15.529 CXX app/trace/trace.o 00:04:15.529 CC app/spdk_lspci/spdk_lspci.o 00:04:15.529 CC app/nvmf_tgt/nvmf_main.o 00:04:15.529 CC app/iscsi_tgt/iscsi_tgt.o 00:04:15.529 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:15.529 CC examples/util/zipf/zipf.o 00:04:15.529 CC app/spdk_tgt/spdk_tgt.o 00:04:15.529 CC examples/ioat/perf/perf.o 00:04:15.529 CC test/thread/poller_perf/poller_perf.o 00:04:15.786 LINK spdk_lspci 00:04:15.786 LINK zipf 00:04:15.786 LINK interrupt_tgt 00:04:15.786 LINK nvmf_tgt 00:04:15.786 LINK spdk_trace_record 00:04:15.786 LINK poller_perf 00:04:15.786 LINK iscsi_tgt 00:04:15.786 LINK spdk_tgt 00:04:15.786 LINK ioat_perf 00:04:16.043 CC app/spdk_nvme_perf/perf.o 00:04:16.043 LINK spdk_trace 00:04:16.044 CC app/spdk_nvme_identify/identify.o 00:04:16.044 TEST_HEADER include/spdk/accel.h 00:04:16.044 TEST_HEADER include/spdk/accel_module.h 00:04:16.044 TEST_HEADER include/spdk/assert.h 00:04:16.044 TEST_HEADER include/spdk/barrier.h 00:04:16.044 TEST_HEADER include/spdk/base64.h 00:04:16.044 TEST_HEADER include/spdk/bdev.h 00:04:16.044 TEST_HEADER include/spdk/bdev_module.h 00:04:16.044 TEST_HEADER include/spdk/bdev_zone.h 00:04:16.044 TEST_HEADER include/spdk/bit_array.h 00:04:16.044 TEST_HEADER include/spdk/bit_pool.h 00:04:16.044 TEST_HEADER include/spdk/blob_bdev.h 00:04:16.044 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:16.044 TEST_HEADER include/spdk/blobfs.h 00:04:16.044 TEST_HEADER include/spdk/blob.h 00:04:16.044 TEST_HEADER include/spdk/conf.h 00:04:16.044 
TEST_HEADER include/spdk/config.h 00:04:16.044 CC app/spdk_nvme_discover/discovery_aer.o 00:04:16.044 TEST_HEADER include/spdk/cpuset.h 00:04:16.044 TEST_HEADER include/spdk/crc16.h 00:04:16.044 TEST_HEADER include/spdk/crc32.h 00:04:16.044 TEST_HEADER include/spdk/crc64.h 00:04:16.044 TEST_HEADER include/spdk/dif.h 00:04:16.044 TEST_HEADER include/spdk/dma.h 00:04:16.044 TEST_HEADER include/spdk/endian.h 00:04:16.044 TEST_HEADER include/spdk/env_dpdk.h 00:04:16.044 TEST_HEADER include/spdk/env.h 00:04:16.044 TEST_HEADER include/spdk/event.h 00:04:16.044 CC examples/ioat/verify/verify.o 00:04:16.044 TEST_HEADER include/spdk/fd_group.h 00:04:16.044 TEST_HEADER include/spdk/fd.h 00:04:16.044 TEST_HEADER include/spdk/file.h 00:04:16.044 TEST_HEADER include/spdk/fsdev.h 00:04:16.044 CC app/spdk_top/spdk_top.o 00:04:16.044 TEST_HEADER include/spdk/fsdev_module.h 00:04:16.044 TEST_HEADER include/spdk/ftl.h 00:04:16.044 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:16.044 TEST_HEADER include/spdk/gpt_spec.h 00:04:16.044 TEST_HEADER include/spdk/hexlify.h 00:04:16.044 TEST_HEADER include/spdk/histogram_data.h 00:04:16.044 TEST_HEADER include/spdk/idxd.h 00:04:16.044 TEST_HEADER include/spdk/idxd_spec.h 00:04:16.044 TEST_HEADER include/spdk/init.h 00:04:16.044 TEST_HEADER include/spdk/ioat.h 00:04:16.044 TEST_HEADER include/spdk/ioat_spec.h 00:04:16.044 TEST_HEADER include/spdk/iscsi_spec.h 00:04:16.044 TEST_HEADER include/spdk/json.h 00:04:16.044 TEST_HEADER include/spdk/jsonrpc.h 00:04:16.301 TEST_HEADER include/spdk/keyring.h 00:04:16.301 TEST_HEADER include/spdk/keyring_module.h 00:04:16.301 TEST_HEADER include/spdk/likely.h 00:04:16.301 TEST_HEADER include/spdk/log.h 00:04:16.301 TEST_HEADER include/spdk/lvol.h 00:04:16.301 TEST_HEADER include/spdk/md5.h 00:04:16.301 CC test/dma/test_dma/test_dma.o 00:04:16.301 TEST_HEADER include/spdk/memory.h 00:04:16.301 TEST_HEADER include/spdk/mmio.h 00:04:16.301 TEST_HEADER include/spdk/nbd.h 00:04:16.301 TEST_HEADER include/spdk/net.h 00:04:16.301 TEST_HEADER include/spdk/notify.h 00:04:16.301 TEST_HEADER include/spdk/nvme.h 00:04:16.301 TEST_HEADER include/spdk/nvme_intel.h 00:04:16.301 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:16.301 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:16.302 TEST_HEADER include/spdk/nvme_spec.h 00:04:16.302 TEST_HEADER include/spdk/nvme_zns.h 00:04:16.302 CC test/app/bdev_svc/bdev_svc.o 00:04:16.302 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:16.302 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:16.302 TEST_HEADER include/spdk/nvmf.h 00:04:16.302 TEST_HEADER include/spdk/nvmf_spec.h 00:04:16.302 TEST_HEADER include/spdk/nvmf_transport.h 00:04:16.302 TEST_HEADER include/spdk/opal.h 00:04:16.302 TEST_HEADER include/spdk/opal_spec.h 00:04:16.302 TEST_HEADER include/spdk/pci_ids.h 00:04:16.302 TEST_HEADER include/spdk/pipe.h 00:04:16.302 CC app/spdk_dd/spdk_dd.o 00:04:16.302 TEST_HEADER include/spdk/queue.h 00:04:16.302 TEST_HEADER include/spdk/reduce.h 00:04:16.302 TEST_HEADER include/spdk/rpc.h 00:04:16.302 TEST_HEADER include/spdk/scheduler.h 00:04:16.302 TEST_HEADER include/spdk/scsi.h 00:04:16.302 TEST_HEADER include/spdk/scsi_spec.h 00:04:16.302 TEST_HEADER include/spdk/sock.h 00:04:16.302 TEST_HEADER include/spdk/stdinc.h 00:04:16.302 TEST_HEADER include/spdk/string.h 00:04:16.302 TEST_HEADER include/spdk/thread.h 00:04:16.302 TEST_HEADER include/spdk/trace.h 00:04:16.302 TEST_HEADER include/spdk/trace_parser.h 00:04:16.302 TEST_HEADER include/spdk/tree.h 00:04:16.302 TEST_HEADER include/spdk/ublk.h 00:04:16.302 
TEST_HEADER include/spdk/util.h 00:04:16.302 TEST_HEADER include/spdk/uuid.h 00:04:16.302 TEST_HEADER include/spdk/version.h 00:04:16.302 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:16.302 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:16.302 TEST_HEADER include/spdk/vhost.h 00:04:16.302 TEST_HEADER include/spdk/vmd.h 00:04:16.302 TEST_HEADER include/spdk/xor.h 00:04:16.302 TEST_HEADER include/spdk/zipf.h 00:04:16.302 CXX test/cpp_headers/accel.o 00:04:16.302 LINK spdk_nvme_discover 00:04:16.559 LINK verify 00:04:16.559 CC test/env/mem_callbacks/mem_callbacks.o 00:04:16.559 LINK bdev_svc 00:04:16.559 CXX test/cpp_headers/accel_module.o 00:04:16.817 CXX test/cpp_headers/assert.o 00:04:16.817 CC app/fio/nvme/fio_plugin.o 00:04:16.817 LINK test_dma 00:04:16.817 LINK spdk_dd 00:04:16.817 CC examples/thread/thread/thread_ex.o 00:04:16.817 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:16.817 CXX test/cpp_headers/barrier.o 00:04:17.074 LINK spdk_nvme_identify 00:04:17.075 CXX test/cpp_headers/base64.o 00:04:17.075 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:17.075 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:17.075 LINK mem_callbacks 00:04:17.075 LINK thread 00:04:17.075 LINK spdk_nvme_perf 00:04:17.333 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:17.333 LINK spdk_top 00:04:17.333 CXX test/cpp_headers/bdev.o 00:04:17.333 CC test/env/vtophys/vtophys.o 00:04:17.333 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:17.333 CC test/env/memory/memory_ut.o 00:04:17.333 LINK nvme_fuzz 00:04:17.591 LINK spdk_nvme 00:04:17.591 CXX test/cpp_headers/bdev_module.o 00:04:17.591 LINK vtophys 00:04:17.591 CC test/env/pci/pci_ut.o 00:04:17.591 LINK env_dpdk_post_init 00:04:17.591 CC examples/sock/hello_world/hello_sock.o 00:04:17.591 CC test/app/histogram_perf/histogram_perf.o 00:04:17.591 CC app/fio/bdev/fio_plugin.o 00:04:17.849 CXX test/cpp_headers/bdev_zone.o 00:04:17.849 LINK vhost_fuzz 00:04:17.849 LINK histogram_perf 00:04:17.849 CC test/event/event_perf/event_perf.o 00:04:17.849 LINK hello_sock 00:04:17.849 CC test/nvme/aer/aer.o 00:04:17.849 CXX test/cpp_headers/bit_array.o 00:04:17.849 CXX test/cpp_headers/bit_pool.o 00:04:18.110 CXX test/cpp_headers/blob_bdev.o 00:04:18.110 LINK event_perf 00:04:18.110 LINK pci_ut 00:04:18.110 CXX test/cpp_headers/blobfs_bdev.o 00:04:18.110 CC examples/vmd/lsvmd/lsvmd.o 00:04:18.110 CC test/rpc_client/rpc_client_test.o 00:04:18.110 CC test/nvme/reset/reset.o 00:04:18.368 LINK aer 00:04:18.368 CC test/event/reactor/reactor.o 00:04:18.368 LINK spdk_bdev 00:04:18.368 LINK lsvmd 00:04:18.368 LINK reactor 00:04:18.368 LINK rpc_client_test 00:04:18.368 CXX test/cpp_headers/blobfs.o 00:04:18.626 LINK reset 00:04:18.626 CC test/accel/dif/dif.o 00:04:18.626 CXX test/cpp_headers/blob.o 00:04:18.626 CC app/vhost/vhost.o 00:04:18.626 CC examples/vmd/led/led.o 00:04:18.626 CC test/blobfs/mkfs/mkfs.o 00:04:18.626 CC test/event/reactor_perf/reactor_perf.o 00:04:18.626 CXX test/cpp_headers/conf.o 00:04:18.626 CC test/nvme/sgl/sgl.o 00:04:18.885 LINK memory_ut 00:04:18.885 LINK led 00:04:18.885 LINK vhost 00:04:18.885 LINK reactor_perf 00:04:18.885 CC test/app/jsoncat/jsoncat.o 00:04:18.885 CXX test/cpp_headers/config.o 00:04:18.885 LINK mkfs 00:04:18.885 CXX test/cpp_headers/cpuset.o 00:04:19.143 LINK jsoncat 00:04:19.143 CXX test/cpp_headers/crc16.o 00:04:19.143 LINK sgl 00:04:19.143 CC test/event/app_repeat/app_repeat.o 00:04:19.143 CC examples/idxd/perf/perf.o 00:04:19.143 CC test/nvme/e2edp/nvme_dp.o 00:04:19.143 CXX test/cpp_headers/crc32.o 00:04:19.143 CC 
examples/fsdev/hello_world/hello_fsdev.o 00:04:19.402 CC test/nvme/overhead/overhead.o 00:04:19.402 CC examples/accel/perf/accel_perf.o 00:04:19.402 LINK app_repeat 00:04:19.402 LINK iscsi_fuzz 00:04:19.402 LINK dif 00:04:19.402 CXX test/cpp_headers/crc64.o 00:04:19.402 CC test/lvol/esnap/esnap.o 00:04:19.660 LINK nvme_dp 00:04:19.660 LINK idxd_perf 00:04:19.660 LINK hello_fsdev 00:04:19.660 CXX test/cpp_headers/dif.o 00:04:19.660 CC test/event/scheduler/scheduler.o 00:04:19.660 CC test/app/stub/stub.o 00:04:19.660 LINK overhead 00:04:19.660 CC test/nvme/err_injection/err_injection.o 00:04:19.660 CXX test/cpp_headers/dma.o 00:04:19.660 CXX test/cpp_headers/endian.o 00:04:19.918 LINK stub 00:04:19.918 LINK scheduler 00:04:19.918 CC test/nvme/startup/startup.o 00:04:19.918 CXX test/cpp_headers/env_dpdk.o 00:04:19.918 LINK accel_perf 00:04:19.918 LINK err_injection 00:04:19.918 CC examples/nvme/hello_world/hello_world.o 00:04:19.918 CC test/nvme/reserve/reserve.o 00:04:19.918 CC examples/blob/hello_world/hello_blob.o 00:04:19.918 CXX test/cpp_headers/env.o 00:04:20.176 CXX test/cpp_headers/event.o 00:04:20.176 CXX test/cpp_headers/fd_group.o 00:04:20.176 LINK startup 00:04:20.176 CXX test/cpp_headers/fd.o 00:04:20.176 LINK hello_world 00:04:20.176 LINK reserve 00:04:20.176 CXX test/cpp_headers/file.o 00:04:20.176 LINK hello_blob 00:04:20.434 CXX test/cpp_headers/fsdev.o 00:04:20.434 CC test/bdev/bdevio/bdevio.o 00:04:20.434 CC test/nvme/simple_copy/simple_copy.o 00:04:20.434 CC test/nvme/connect_stress/connect_stress.o 00:04:20.434 CC examples/bdev/hello_world/hello_bdev.o 00:04:20.434 CXX test/cpp_headers/fsdev_module.o 00:04:20.434 CC test/nvme/boot_partition/boot_partition.o 00:04:20.434 CC examples/nvme/reconnect/reconnect.o 00:04:20.692 CC test/nvme/compliance/nvme_compliance.o 00:04:20.692 CC examples/blob/cli/blobcli.o 00:04:20.692 LINK connect_stress 00:04:20.692 LINK simple_copy 00:04:20.692 CXX test/cpp_headers/ftl.o 00:04:20.692 LINK hello_bdev 00:04:20.692 LINK boot_partition 00:04:20.692 LINK bdevio 00:04:20.950 CC test/nvme/fused_ordering/fused_ordering.o 00:04:20.950 CXX test/cpp_headers/fuse_dispatcher.o 00:04:20.950 CXX test/cpp_headers/gpt_spec.o 00:04:20.950 LINK reconnect 00:04:20.950 CC examples/bdev/bdevperf/bdevperf.o 00:04:20.950 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:20.950 LINK nvme_compliance 00:04:21.209 CC test/nvme/fdp/fdp.o 00:04:21.209 CXX test/cpp_headers/hexlify.o 00:04:21.209 LINK fused_ordering 00:04:21.209 CC test/nvme/cuse/cuse.o 00:04:21.209 LINK doorbell_aers 00:04:21.209 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:21.209 LINK blobcli 00:04:21.209 CC examples/nvme/arbitration/arbitration.o 00:04:21.209 CXX test/cpp_headers/histogram_data.o 00:04:21.467 CXX test/cpp_headers/idxd.o 00:04:21.467 CC examples/nvme/hotplug/hotplug.o 00:04:21.467 LINK fdp 00:04:21.467 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:21.467 CXX test/cpp_headers/idxd_spec.o 00:04:21.725 CC examples/nvme/abort/abort.o 00:04:21.725 CXX test/cpp_headers/init.o 00:04:21.725 LINK hotplug 00:04:21.725 LINK arbitration 00:04:21.725 LINK cmb_copy 00:04:21.725 LINK nvme_manage 00:04:21.983 CXX test/cpp_headers/ioat.o 00:04:21.983 CXX test/cpp_headers/ioat_spec.o 00:04:21.983 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:21.983 CXX test/cpp_headers/iscsi_spec.o 00:04:21.983 CXX test/cpp_headers/json.o 00:04:21.983 CXX test/cpp_headers/jsonrpc.o 00:04:21.983 LINK bdevperf 00:04:21.983 CXX test/cpp_headers/keyring.o 00:04:21.983 CXX test/cpp_headers/keyring_module.o 
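The CXX test/cpp_headers/*.o steps above (and continuing below) build one tiny translation unit per public SPDK header, which is effectively a check that each header compiles on its own as C++. A minimal stand-alone sketch of the same idea, assuming a g++ toolchain and an SPDK checkout at ./spdk (both placeholders, not taken from this log):

#!/usr/bin/env bash
# Sketch: compile each public header in isolation to catch missing includes.
# Assumes g++ is installed and ./spdk is an SPDK source tree (placeholder path).
set -euo pipefail
spdk_dir=./spdk
for hdr in "$spdk_dir"/include/spdk/*.h; do
    name=$(basename "$hdr")
    # One-line translation unit on stdin; -fsyntax-only writes no object file.
    printf '#include <spdk/%s>\n' "$name" |
        g++ -x c++ -std=c++17 -fsyntax-only -I "$spdk_dir/include" - ||
        echo "header not self-contained: $name" >&2
done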
00:04:21.983 CXX test/cpp_headers/likely.o 00:04:21.983 LINK pmr_persistence 00:04:21.983 LINK abort 00:04:22.241 CXX test/cpp_headers/log.o 00:04:22.241 CXX test/cpp_headers/lvol.o 00:04:22.241 CXX test/cpp_headers/md5.o 00:04:22.241 CXX test/cpp_headers/memory.o 00:04:22.241 CXX test/cpp_headers/mmio.o 00:04:22.241 CXX test/cpp_headers/nbd.o 00:04:22.241 CXX test/cpp_headers/net.o 00:04:22.241 CXX test/cpp_headers/notify.o 00:04:22.241 CXX test/cpp_headers/nvme.o 00:04:22.499 CXX test/cpp_headers/nvme_intel.o 00:04:22.499 CXX test/cpp_headers/nvme_ocssd.o 00:04:22.499 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:22.499 CXX test/cpp_headers/nvme_spec.o 00:04:22.499 CXX test/cpp_headers/nvme_zns.o 00:04:22.499 CXX test/cpp_headers/nvmf_cmd.o 00:04:22.499 CC examples/nvmf/nvmf/nvmf.o 00:04:22.499 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:22.499 CXX test/cpp_headers/nvmf.o 00:04:22.499 CXX test/cpp_headers/nvmf_spec.o 00:04:22.499 CXX test/cpp_headers/nvmf_transport.o 00:04:22.758 CXX test/cpp_headers/opal.o 00:04:22.758 CXX test/cpp_headers/opal_spec.o 00:04:22.758 CXX test/cpp_headers/pci_ids.o 00:04:22.758 CXX test/cpp_headers/pipe.o 00:04:22.758 CXX test/cpp_headers/queue.o 00:04:22.758 LINK cuse 00:04:22.758 CXX test/cpp_headers/reduce.o 00:04:22.758 CXX test/cpp_headers/rpc.o 00:04:22.758 CXX test/cpp_headers/scheduler.o 00:04:22.758 CXX test/cpp_headers/scsi.o 00:04:22.758 CXX test/cpp_headers/scsi_spec.o 00:04:22.758 LINK nvmf 00:04:23.016 CXX test/cpp_headers/sock.o 00:04:23.016 CXX test/cpp_headers/stdinc.o 00:04:23.016 CXX test/cpp_headers/string.o 00:04:23.016 CXX test/cpp_headers/thread.o 00:04:23.016 CXX test/cpp_headers/trace.o 00:04:23.016 CXX test/cpp_headers/trace_parser.o 00:04:23.016 CXX test/cpp_headers/tree.o 00:04:23.016 CXX test/cpp_headers/ublk.o 00:04:23.016 CXX test/cpp_headers/util.o 00:04:23.016 CXX test/cpp_headers/uuid.o 00:04:23.016 CXX test/cpp_headers/version.o 00:04:23.016 CXX test/cpp_headers/vfio_user_pci.o 00:04:23.016 CXX test/cpp_headers/vfio_user_spec.o 00:04:23.274 CXX test/cpp_headers/vhost.o 00:04:23.274 CXX test/cpp_headers/vmd.o 00:04:23.274 CXX test/cpp_headers/xor.o 00:04:23.274 CXX test/cpp_headers/zipf.o 00:04:26.562 LINK esnap 00:04:26.562 00:04:26.562 real 1m35.858s 00:04:26.562 user 9m17.443s 00:04:26.562 sys 1m38.208s 00:04:26.562 ************************************ 00:04:26.562 END TEST make 00:04:26.562 ************************************ 00:04:26.562 08:42:04 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:26.562 08:42:04 make -- common/autotest_common.sh@10 -- $ set +x 00:04:26.562 08:42:04 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:26.562 08:42:04 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:26.562 08:42:04 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:26.562 08:42:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.562 08:42:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:26.562 08:42:04 -- pm/common@44 -- $ pid=5285 00:04:26.562 08:42:04 -- pm/common@50 -- $ kill -TERM 5285 00:04:26.562 08:42:04 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.562 08:42:04 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:26.562 08:42:04 -- pm/common@44 -- $ pid=5286 00:04:26.562 08:42:04 -- pm/common@50 -- $ kill -TERM 5286 00:04:26.562 08:42:04 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:26.562 08:42:04 -- 
common/autotest_common.sh@1681 -- # lcov --version 00:04:26.562 08:42:04 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:26.562 08:42:04 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:26.562 08:42:04 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.562 08:42:04 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.562 08:42:04 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.562 08:42:04 -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.562 08:42:04 -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.562 08:42:04 -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.562 08:42:04 -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.562 08:42:04 -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.562 08:42:04 -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.562 08:42:04 -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.562 08:42:04 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.562 08:42:04 -- scripts/common.sh@344 -- # case "$op" in 00:04:26.562 08:42:04 -- scripts/common.sh@345 -- # : 1 00:04:26.562 08:42:04 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.562 08:42:04 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.562 08:42:04 -- scripts/common.sh@365 -- # decimal 1 00:04:26.562 08:42:04 -- scripts/common.sh@353 -- # local d=1 00:04:26.562 08:42:04 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.562 08:42:04 -- scripts/common.sh@355 -- # echo 1 00:04:26.562 08:42:04 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.562 08:42:04 -- scripts/common.sh@366 -- # decimal 2 00:04:26.562 08:42:04 -- scripts/common.sh@353 -- # local d=2 00:04:26.562 08:42:04 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.562 08:42:04 -- scripts/common.sh@355 -- # echo 2 00:04:26.562 08:42:04 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.562 08:42:04 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.562 08:42:04 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.562 08:42:04 -- scripts/common.sh@368 -- # return 0 00:04:26.562 08:42:04 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.562 08:42:04 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:26.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.562 --rc genhtml_branch_coverage=1 00:04:26.562 --rc genhtml_function_coverage=1 00:04:26.562 --rc genhtml_legend=1 00:04:26.562 --rc geninfo_all_blocks=1 00:04:26.562 --rc geninfo_unexecuted_blocks=1 00:04:26.562 00:04:26.562 ' 00:04:26.562 08:42:04 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:26.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.562 --rc genhtml_branch_coverage=1 00:04:26.562 --rc genhtml_function_coverage=1 00:04:26.562 --rc genhtml_legend=1 00:04:26.562 --rc geninfo_all_blocks=1 00:04:26.562 --rc geninfo_unexecuted_blocks=1 00:04:26.562 00:04:26.562 ' 00:04:26.562 08:42:04 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:26.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.562 --rc genhtml_branch_coverage=1 00:04:26.562 --rc genhtml_function_coverage=1 00:04:26.562 --rc genhtml_legend=1 00:04:26.562 --rc geninfo_all_blocks=1 00:04:26.562 --rc geninfo_unexecuted_blocks=1 00:04:26.562 00:04:26.562 ' 00:04:26.562 08:42:04 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:26.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.562 --rc genhtml_branch_coverage=1 00:04:26.562 
--rc genhtml_function_coverage=1 00:04:26.562 --rc genhtml_legend=1 00:04:26.562 --rc geninfo_all_blocks=1 00:04:26.562 --rc geninfo_unexecuted_blocks=1 00:04:26.562 00:04:26.562 ' 00:04:26.562 08:42:04 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:26.562 08:42:04 -- nvmf/common.sh@7 -- # uname -s 00:04:26.821 08:42:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.821 08:42:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.821 08:42:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.821 08:42:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.821 08:42:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.821 08:42:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.821 08:42:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:26.821 08:42:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.821 08:42:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.821 08:42:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.821 08:42:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:04:26.821 08:42:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:04:26.821 08:42:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.821 08:42:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.821 08:42:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:26.821 08:42:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:26.821 08:42:04 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:26.821 08:42:04 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:26.821 08:42:04 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.821 08:42:04 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.821 08:42:04 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.822 08:42:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.822 08:42:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.822 08:42:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.822 08:42:04 -- paths/export.sh@5 -- # export PATH 00:04:26.822 08:42:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.822 08:42:04 -- nvmf/common.sh@51 -- # : 0 00:04:26.822 08:42:04 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:26.822 08:42:04 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:26.822 08:42:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:26.822 08:42:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.822 08:42:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:26.822 08:42:04 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:26.822 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:26.822 08:42:04 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:26.822 08:42:04 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:26.822 08:42:04 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:26.822 08:42:04 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:26.822 08:42:04 -- spdk/autotest.sh@32 -- # uname -s 00:04:26.822 08:42:04 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:26.822 08:42:04 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:26.822 08:42:04 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:26.822 08:42:04 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:26.822 08:42:04 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:26.822 08:42:04 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:26.822 08:42:04 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:26.822 08:42:04 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:26.822 08:42:04 -- spdk/autotest.sh@48 -- # udevadm_pid=54989 00:04:26.822 08:42:04 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:26.822 08:42:04 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:26.822 08:42:04 -- pm/common@17 -- # local monitor 00:04:26.822 08:42:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.822 08:42:04 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.822 08:42:04 -- pm/common@25 -- # sleep 1 00:04:26.822 08:42:04 -- pm/common@21 -- # date +%s 00:04:26.822 08:42:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727512924 00:04:26.822 08:42:04 -- pm/common@21 -- # date +%s 00:04:26.822 08:42:04 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727512924 00:04:26.822 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727512924_collect-cpu-load.pm.log 00:04:26.822 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727512924_collect-vmstat.pm.log 00:04:27.758 08:42:05 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:27.759 08:42:05 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:27.759 08:42:05 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:27.759 08:42:05 -- common/autotest_common.sh@10 -- # set +x 00:04:27.759 08:42:05 -- spdk/autotest.sh@59 -- # create_test_list 00:04:27.759 08:42:05 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:27.759 08:42:05 -- common/autotest_common.sh@10 -- # set +x 00:04:27.759 08:42:05 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:27.759 08:42:05 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:27.759 08:42:05 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:27.759 08:42:05 -- 
spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:27.759 08:42:05 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:27.759 08:42:05 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:27.759 08:42:05 -- common/autotest_common.sh@1455 -- # uname 00:04:27.759 08:42:05 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:27.759 08:42:05 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:27.759 08:42:05 -- common/autotest_common.sh@1475 -- # uname 00:04:27.759 08:42:05 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:27.759 08:42:05 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:27.759 08:42:05 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:28.025 lcov: LCOV version 1.15 00:04:28.025 08:42:05 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:42.943 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:42.943 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:01.034 08:42:36 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:01.034 08:42:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.034 08:42:36 -- common/autotest_common.sh@10 -- # set +x 00:05:01.034 08:42:36 -- spdk/autotest.sh@78 -- # rm -f 00:05:01.034 08:42:36 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:01.034 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.034 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:01.034 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:01.034 08:42:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:01.035 08:42:36 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:01.035 08:42:36 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:01.035 08:42:36 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:01.035 08:42:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:01.035 08:42:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:01.035 08:42:36 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:01.035 08:42:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:01.035 08:42:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:01.035 08:42:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:01.035 08:42:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:01.035 08:42:36 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:01.035 08:42:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:01.035 08:42:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:01.035 08:42:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:01.035 08:42:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:01.035 
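Two pieces of the setup just traced are worth pulling out before the block-device scan resumes below: the cmp_versions / `lt 1.15 2` check decides whether the installed lcov still takes the old `--rc lcov_branch_coverage=1` style switches, and the `lcov ... -i -t Baseline` call records a zero-coverage baseline before any test runs. A condensed sketch of that flow (lcov assumed installed; paths are the ones used in this job):

#!/usr/bin/env bash
# Sketch: choose lcov options by version, then capture an initial (-i) baseline.
set -euo pipefail
src=/home/vagrant/spdk_repo/spdk        # source tree, as used in this job
out=$src/../output                      # output directory, as used in this job

ver=$(lcov --version | awk '{print $NF}')    # e.g. "1.15"
opts=""
# "version < 2" via sort -V, same effect as the cmp_versions trace above.
if [[ $(printf '%s\n' "$ver" 2 | sort -V | head -n1) != 2 ]]; then
    opts="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
fi

# -i records all-zero baseline coverage so later captures can be diffed/merged.
lcov $opts -q -c --no-external -i -t Baseline -d "$src" -o "$out/cov_base.info"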
08:42:36 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:01.035 08:42:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:01.035 08:42:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:01.035 08:42:36 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:01.035 08:42:36 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:01.035 08:42:36 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:01.035 08:42:36 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:01.035 08:42:36 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:01.035 08:42:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:01.035 08:42:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.035 08:42:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.035 08:42:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:01.035 08:42:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:01.035 08:42:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:01.035 No valid GPT data, bailing 00:05:01.035 08:42:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:01.035 08:42:36 -- scripts/common.sh@394 -- # pt= 00:05:01.035 08:42:36 -- scripts/common.sh@395 -- # return 1 00:05:01.035 08:42:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:01.035 1+0 records in 00:05:01.035 1+0 records out 00:05:01.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460218 s, 228 MB/s 00:05:01.035 08:42:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.035 08:42:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.035 08:42:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:01.035 08:42:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:01.035 08:42:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:01.035 No valid GPT data, bailing 00:05:01.035 08:42:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:01.035 08:42:36 -- scripts/common.sh@394 -- # pt= 00:05:01.035 08:42:36 -- scripts/common.sh@395 -- # return 1 00:05:01.035 08:42:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:01.035 1+0 records in 00:05:01.035 1+0 records out 00:05:01.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00365548 s, 287 MB/s 00:05:01.035 08:42:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.035 08:42:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:01.035 08:42:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:01.035 08:42:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:01.035 08:42:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:01.035 No valid GPT data, bailing 00:05:01.035 08:42:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:01.035 08:42:37 -- scripts/common.sh@394 -- # pt= 00:05:01.035 08:42:37 -- scripts/common.sh@395 -- # return 1 00:05:01.035 08:42:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:01.035 1+0 records in 00:05:01.035 1+0 records out 00:05:01.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043869 s, 239 MB/s 00:05:01.035 08:42:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.035 08:42:37 -- spdk/autotest.sh@99 -- # [[ -z '' 
]] 00:05:01.035 08:42:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:01.035 08:42:37 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:01.035 08:42:37 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:01.035 No valid GPT data, bailing 00:05:01.035 08:42:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:01.035 08:42:37 -- scripts/common.sh@394 -- # pt= 00:05:01.035 08:42:37 -- scripts/common.sh@395 -- # return 1 00:05:01.035 08:42:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:01.035 1+0 records in 00:05:01.035 1+0 records out 00:05:01.035 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0045465 s, 231 MB/s 00:05:01.035 08:42:37 -- spdk/autotest.sh@105 -- # sync 00:05:01.035 08:42:37 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:01.035 08:42:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:01.035 08:42:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:01.294 08:42:39 -- spdk/autotest.sh@111 -- # uname -s 00:05:01.294 08:42:39 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:01.294 08:42:39 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:01.294 08:42:39 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:01.862 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.862 Hugepages 00:05:01.862 node hugesize free / total 00:05:02.120 node0 1048576kB 0 / 0 00:05:02.120 node0 2048kB 0 / 0 00:05:02.120 00:05:02.120 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:02.120 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:02.120 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:02.120 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:02.120 08:42:40 -- spdk/autotest.sh@117 -- # uname -s 00:05:02.120 08:42:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:02.120 08:42:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:02.120 08:42:40 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:03.057 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.057 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.057 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:03.057 08:42:40 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:03.993 08:42:41 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:03.993 08:42:41 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:03.993 08:42:41 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:03.993 08:42:41 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:03.993 08:42:41 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:03.993 08:42:41 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:03.993 08:42:41 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.993 08:42:41 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:03.993 08:42:41 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:04.251 08:42:42 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:04.251 08:42:42 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:04.251 08:42:42 -- 
common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.508 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.508 Waiting for block devices as requested 00:05:04.508 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.767 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.767 08:42:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:04.767 08:42:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:04.767 08:42:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:04.767 08:42:42 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:04.767 08:42:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:04.767 08:42:42 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:04.767 08:42:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:04.767 08:42:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:04.767 08:42:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:04.767 08:42:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:04.767 08:42:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:04.767 08:42:42 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:04.767 08:42:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:04.767 08:42:42 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:04.767 08:42:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:04.767 08:42:42 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:04.767 08:42:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:04.767 08:42:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:04.767 08:42:42 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:04.767 08:42:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:04.767 08:42:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:04.767 08:42:42 -- common/autotest_common.sh@1541 -- # continue 00:05:04.767 08:42:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:04.767 08:42:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:04.767 08:42:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:04.767 08:42:42 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:04.767 08:42:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:04.767 08:42:42 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:04.767 08:42:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:04.767 08:42:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:04.767 08:42:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:04.767 08:42:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:04.767 08:42:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:04.767 08:42:42 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:04.767 08:42:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:04.767 08:42:42 -- 
common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:04.767 08:42:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:04.767 08:42:42 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:04.767 08:42:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:04.767 08:42:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:04.767 08:42:42 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:04.767 08:42:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:04.767 08:42:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:04.767 08:42:42 -- common/autotest_common.sh@1541 -- # continue 00:05:04.767 08:42:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:04.767 08:42:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.767 08:42:42 -- common/autotest_common.sh@10 -- # set +x 00:05:04.767 08:42:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:04.767 08:42:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.767 08:42:42 -- common/autotest_common.sh@10 -- # set +x 00:05:04.767 08:42:42 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.704 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.704 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.704 08:42:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:05.704 08:42:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.704 08:42:43 -- common/autotest_common.sh@10 -- # set +x 00:05:05.704 08:42:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:05.704 08:42:43 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:05.704 08:42:43 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.704 08:42:43 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:05.704 08:42:43 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:05.704 08:42:43 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:05.704 08:42:43 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:05.704 08:42:43 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:05.704 08:42:43 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:05.704 08:42:43 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:05.704 08:42:43 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.704 08:42:43 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.704 08:42:43 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:05.704 08:42:43 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:05.704 08:42:43 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:05.704 08:42:43 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:05.704 08:42:43 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:05.963 08:42:43 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:05.963 08:42:43 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.963 08:42:43 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:05.963 08:42:43 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:05.963 08:42:43 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:05.963 
08:42:43 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.963 08:42:43 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:05.963 08:42:43 -- common/autotest_common.sh@1570 -- # return 0 00:05:05.963 08:42:43 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:05.963 08:42:43 -- common/autotest_common.sh@1578 -- # return 0 00:05:05.963 08:42:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:05.963 08:42:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:05.963 08:42:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:05.963 08:42:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:05.963 08:42:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:05.963 08:42:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.963 08:42:43 -- common/autotest_common.sh@10 -- # set +x 00:05:05.963 08:42:43 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:05.963 08:42:43 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:05.963 08:42:43 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:05.963 08:42:43 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.963 08:42:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.963 08:42:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.963 08:42:43 -- common/autotest_common.sh@10 -- # set +x 00:05:05.963 ************************************ 00:05:05.963 START TEST env 00:05:05.963 ************************************ 00:05:05.963 08:42:43 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.963 * Looking for test storage... 00:05:05.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:05.963 08:42:43 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:05.963 08:42:43 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:05.963 08:42:43 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:05.963 08:42:43 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:05.963 08:42:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.963 08:42:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.963 08:42:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.963 08:42:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.963 08:42:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.963 08:42:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.963 08:42:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.963 08:42:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.963 08:42:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.963 08:42:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.963 08:42:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.963 08:42:43 env -- scripts/common.sh@344 -- # case "$op" in 00:05:05.963 08:42:43 env -- scripts/common.sh@345 -- # : 1 00:05:05.963 08:42:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.963 08:42:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.963 08:42:43 env -- scripts/common.sh@365 -- # decimal 1 00:05:05.963 08:42:43 env -- scripts/common.sh@353 -- # local d=1 00:05:05.963 08:42:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.963 08:42:43 env -- scripts/common.sh@355 -- # echo 1 00:05:05.963 08:42:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.963 08:42:43 env -- scripts/common.sh@366 -- # decimal 2 00:05:05.963 08:42:43 env -- scripts/common.sh@353 -- # local d=2 00:05:05.963 08:42:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.963 08:42:43 env -- scripts/common.sh@355 -- # echo 2 00:05:05.963 08:42:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.963 08:42:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.963 08:42:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.963 08:42:43 env -- scripts/common.sh@368 -- # return 0 00:05:05.963 08:42:43 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.963 08:42:43 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:05.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.963 --rc genhtml_branch_coverage=1 00:05:05.963 --rc genhtml_function_coverage=1 00:05:05.963 --rc genhtml_legend=1 00:05:05.963 --rc geninfo_all_blocks=1 00:05:05.963 --rc geninfo_unexecuted_blocks=1 00:05:05.963 00:05:05.963 ' 00:05:05.963 08:42:43 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:05.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.963 --rc genhtml_branch_coverage=1 00:05:05.963 --rc genhtml_function_coverage=1 00:05:05.963 --rc genhtml_legend=1 00:05:05.963 --rc geninfo_all_blocks=1 00:05:05.963 --rc geninfo_unexecuted_blocks=1 00:05:05.963 00:05:05.963 ' 00:05:05.963 08:42:43 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:05.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.963 --rc genhtml_branch_coverage=1 00:05:05.963 --rc genhtml_function_coverage=1 00:05:05.963 --rc genhtml_legend=1 00:05:05.963 --rc geninfo_all_blocks=1 00:05:05.963 --rc geninfo_unexecuted_blocks=1 00:05:05.963 00:05:05.963 ' 00:05:05.963 08:42:43 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:05.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.963 --rc genhtml_branch_coverage=1 00:05:05.963 --rc genhtml_function_coverage=1 00:05:05.963 --rc genhtml_legend=1 00:05:05.963 --rc geninfo_all_blocks=1 00:05:05.963 --rc geninfo_unexecuted_blocks=1 00:05:05.963 00:05:05.963 ' 00:05:05.963 08:42:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.963 08:42:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.963 08:42:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.963 08:42:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.963 ************************************ 00:05:05.963 START TEST env_memory 00:05:05.963 ************************************ 00:05:05.963 08:42:43 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.963 00:05:05.963 00:05:05.963 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.963 http://cunit.sourceforge.net/ 00:05:05.963 00:05:05.963 00:05:05.963 Suite: memory 00:05:06.221 Test: alloc and free memory map ...[2024-09-28 08:42:44.004361] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:06.221 passed 00:05:06.221 Test: mem map translation ...[2024-09-28 08:42:44.065176] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:06.221 [2024-09-28 08:42:44.065256] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:06.221 [2024-09-28 08:42:44.065356] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:06.221 [2024-09-28 08:42:44.065406] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:06.222 passed 00:05:06.222 Test: mem map registration ...[2024-09-28 08:42:44.163595] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:06.222 [2024-09-28 08:42:44.163669] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:06.222 passed 00:05:06.480 Test: mem map adjacent registrations ...passed 00:05:06.480 00:05:06.480 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.480 suites 1 1 n/a 0 0 00:05:06.480 tests 4 4 4 0 0 00:05:06.480 asserts 152 152 152 0 n/a 00:05:06.480 00:05:06.480 Elapsed time = 0.300 seconds 00:05:06.480 00:05:06.480 real 0m0.334s 00:05:06.480 user 0m0.300s 00:05:06.480 sys 0m0.025s 00:05:06.480 08:42:44 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.480 08:42:44 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:06.480 ************************************ 00:05:06.480 END TEST env_memory 00:05:06.480 ************************************ 00:05:06.480 08:42:44 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.480 08:42:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.480 08:42:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.480 08:42:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.480 ************************************ 00:05:06.480 START TEST env_vtophys 00:05:06.480 ************************************ 00:05:06.480 08:42:44 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.480 EAL: lib.eal log level changed from notice to debug 00:05:06.480 EAL: Detected lcore 0 as core 0 on socket 0 00:05:06.480 EAL: Detected lcore 1 as core 0 on socket 0 00:05:06.480 EAL: Detected lcore 2 as core 0 on socket 0 00:05:06.480 EAL: Detected lcore 3 as core 0 on socket 0 00:05:06.480 EAL: Detected lcore 4 as core 0 on socket 0 00:05:06.480 EAL: Detected lcore 5 as core 0 on socket 0 00:05:06.480 EAL: Detected lcore 6 as core 0 on socket 0 00:05:06.480 EAL: Detected lcore 7 as core 0 on socket 0 00:05:06.480 EAL: Detected lcore 8 as core 0 on socket 0 00:05:06.480 EAL: Detected lcore 9 as core 0 on socket 0 00:05:06.480 EAL: Maximum logical cores by configuration: 128 00:05:06.480 EAL: Detected CPU lcores: 10 00:05:06.480 EAL: Detected NUMA nodes: 1 00:05:06.480 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:06.480 EAL: Detected shared linkage of DPDK 00:05:06.480 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:06.480 EAL: Selected IOVA mode 'PA' 00:05:06.480 EAL: Probing VFIO support... 00:05:06.480 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.480 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:06.480 EAL: Ask a virtual area of 0x2e000 bytes 00:05:06.480 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:06.480 EAL: Setting up physically contiguous memory... 00:05:06.480 EAL: Setting maximum number of open files to 524288 00:05:06.480 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:06.480 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:06.481 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.481 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:06.481 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.481 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.481 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:06.481 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:06.481 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.481 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:06.481 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.481 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.481 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:06.481 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:06.481 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.481 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:06.481 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.481 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.481 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:06.481 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:06.481 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.481 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:06.481 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.481 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.481 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:06.481 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:06.481 EAL: Hugepages will be freed exactly as allocated. 00:05:06.481 EAL: No shared files mode enabled, IPC is disabled 00:05:06.481 EAL: No shared files mode enabled, IPC is disabled 00:05:06.740 EAL: TSC frequency is ~2200000 KHz 00:05:06.740 EAL: Main lcore 0 is ready (tid=7efd300caa40;cpuset=[0]) 00:05:06.740 EAL: Trying to obtain current memory policy. 00:05:06.740 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.740 EAL: Restoring previous memory policy: 0 00:05:06.740 EAL: request: mp_malloc_sync 00:05:06.740 EAL: No shared files mode enabled, IPC is disabled 00:05:06.740 EAL: Heap on socket 0 was expanded by 2MB 00:05:06.740 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.740 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:06.740 EAL: Mem event callback 'spdk:(nil)' registered 00:05:06.740 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:06.740 00:05:06.740 00:05:06.740 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.740 http://cunit.sourceforge.net/ 00:05:06.740 00:05:06.740 00:05:06.740 Suite: components_suite 00:05:06.999 Test: vtophys_malloc_test ...passed 00:05:06.999 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:06.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.999 EAL: Restoring previous memory policy: 4 00:05:06.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.999 EAL: request: mp_malloc_sync 00:05:06.999 EAL: No shared files mode enabled, IPC is disabled 00:05:06.999 EAL: Heap on socket 0 was expanded by 4MB 00:05:06.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.999 EAL: request: mp_malloc_sync 00:05:06.999 EAL: No shared files mode enabled, IPC is disabled 00:05:06.999 EAL: Heap on socket 0 was shrunk by 4MB 00:05:06.999 EAL: Trying to obtain current memory policy. 00:05:06.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.999 EAL: Restoring previous memory policy: 4 00:05:06.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.999 EAL: request: mp_malloc_sync 00:05:06.999 EAL: No shared files mode enabled, IPC is disabled 00:05:06.999 EAL: Heap on socket 0 was expanded by 6MB 00:05:06.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.999 EAL: request: mp_malloc_sync 00:05:06.999 EAL: No shared files mode enabled, IPC is disabled 00:05:06.999 EAL: Heap on socket 0 was shrunk by 6MB 00:05:06.999 EAL: Trying to obtain current memory policy. 00:05:06.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.999 EAL: Restoring previous memory policy: 4 00:05:06.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.999 EAL: request: mp_malloc_sync 00:05:06.999 EAL: No shared files mode enabled, IPC is disabled 00:05:06.999 EAL: Heap on socket 0 was expanded by 10MB 00:05:06.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.999 EAL: request: mp_malloc_sync 00:05:06.999 EAL: No shared files mode enabled, IPC is disabled 00:05:06.999 EAL: Heap on socket 0 was shrunk by 10MB 00:05:06.999 EAL: Trying to obtain current memory policy. 00:05:06.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.999 EAL: Restoring previous memory policy: 4 00:05:06.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.999 EAL: request: mp_malloc_sync 00:05:06.999 EAL: No shared files mode enabled, IPC is disabled 00:05:06.999 EAL: Heap on socket 0 was expanded by 18MB 00:05:06.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.999 EAL: request: mp_malloc_sync 00:05:06.999 EAL: No shared files mode enabled, IPC is disabled 00:05:06.999 EAL: Heap on socket 0 was shrunk by 18MB 00:05:06.999 EAL: Trying to obtain current memory policy. 00:05:06.999 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.999 EAL: Restoring previous memory policy: 4 00:05:06.999 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.999 EAL: request: mp_malloc_sync 00:05:06.999 EAL: No shared files mode enabled, IPC is disabled 00:05:06.999 EAL: Heap on socket 0 was expanded by 34MB 00:05:07.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.258 EAL: request: mp_malloc_sync 00:05:07.258 EAL: No shared files mode enabled, IPC is disabled 00:05:07.258 EAL: Heap on socket 0 was shrunk by 34MB 00:05:07.258 EAL: Trying to obtain current memory policy. 
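The "Module /sys/module/vfio not found" messages above are EAL noticing that VFIO is not loaded in this VM, so it selects PA IOVA mode and the devices later end up on uio_pci_generic; the expand/shrink lines that continue below are the vtophys malloc test walking the heap up and down. A small pre-flight version of that module check, as a plain shell sketch:

#!/usr/bin/env bash
# Sketch: check for VFIO before launching a DPDK/SPDK app.
# If the modules are absent, EAL skips VFIO exactly as the log above shows.
if [[ -d /sys/module/vfio && -d /sys/module/vfio_pci ]]; then
    echo "VFIO available: IOMMU-backed device access is possible"
else
    echo "VFIO not loaded: expect fallback to uio_pci_generic / PA IOVA mode"
    # Loading the modules requires root, e.g.:  modprobe -a vfio vfio-pci
fi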
00:05:07.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.258 EAL: Restoring previous memory policy: 4 00:05:07.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.258 EAL: request: mp_malloc_sync 00:05:07.258 EAL: No shared files mode enabled, IPC is disabled 00:05:07.258 EAL: Heap on socket 0 was expanded by 66MB 00:05:07.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.258 EAL: request: mp_malloc_sync 00:05:07.258 EAL: No shared files mode enabled, IPC is disabled 00:05:07.258 EAL: Heap on socket 0 was shrunk by 66MB 00:05:07.258 EAL: Trying to obtain current memory policy. 00:05:07.258 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.258 EAL: Restoring previous memory policy: 4 00:05:07.258 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.258 EAL: request: mp_malloc_sync 00:05:07.258 EAL: No shared files mode enabled, IPC is disabled 00:05:07.258 EAL: Heap on socket 0 was expanded by 130MB 00:05:07.517 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.517 EAL: request: mp_malloc_sync 00:05:07.517 EAL: No shared files mode enabled, IPC is disabled 00:05:07.517 EAL: Heap on socket 0 was shrunk by 130MB 00:05:07.776 EAL: Trying to obtain current memory policy. 00:05:07.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.776 EAL: Restoring previous memory policy: 4 00:05:07.776 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.776 EAL: request: mp_malloc_sync 00:05:07.776 EAL: No shared files mode enabled, IPC is disabled 00:05:07.776 EAL: Heap on socket 0 was expanded by 258MB 00:05:08.036 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.036 EAL: request: mp_malloc_sync 00:05:08.036 EAL: No shared files mode enabled, IPC is disabled 00:05:08.036 EAL: Heap on socket 0 was shrunk by 258MB 00:05:08.296 EAL: Trying to obtain current memory policy. 00:05:08.296 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.554 EAL: Restoring previous memory policy: 4 00:05:08.554 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.554 EAL: request: mp_malloc_sync 00:05:08.554 EAL: No shared files mode enabled, IPC is disabled 00:05:08.554 EAL: Heap on socket 0 was expanded by 514MB 00:05:09.122 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.122 EAL: request: mp_malloc_sync 00:05:09.122 EAL: No shared files mode enabled, IPC is disabled 00:05:09.122 EAL: Heap on socket 0 was shrunk by 514MB 00:05:09.690 EAL: Trying to obtain current memory policy. 
00:05:09.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.948 EAL: Restoring previous memory policy: 4 00:05:09.948 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.948 EAL: request: mp_malloc_sync 00:05:09.948 EAL: No shared files mode enabled, IPC is disabled 00:05:09.948 EAL: Heap on socket 0 was expanded by 1026MB 00:05:11.327 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.327 EAL: request: mp_malloc_sync 00:05:11.327 EAL: No shared files mode enabled, IPC is disabled 00:05:11.327 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:12.705 passed 00:05:12.705 00:05:12.705 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.705 suites 1 1 n/a 0 0 00:05:12.705 tests 2 2 2 0 0 00:05:12.705 asserts 5775 5775 5775 0 n/a 00:05:12.705 00:05:12.705 Elapsed time = 5.687 seconds 00:05:12.705 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.705 EAL: request: mp_malloc_sync 00:05:12.705 EAL: No shared files mode enabled, IPC is disabled 00:05:12.705 EAL: Heap on socket 0 was shrunk by 2MB 00:05:12.705 EAL: No shared files mode enabled, IPC is disabled 00:05:12.705 EAL: No shared files mode enabled, IPC is disabled 00:05:12.705 EAL: No shared files mode enabled, IPC is disabled 00:05:12.705 00:05:12.705 real 0m5.999s 00:05:12.705 user 0m5.220s 00:05:12.705 sys 0m0.615s 00:05:12.705 08:42:50 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.705 08:42:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:12.705 ************************************ 00:05:12.705 END TEST env_vtophys 00:05:12.705 ************************************ 00:05:12.705 08:42:50 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.706 08:42:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.706 08:42:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.706 08:42:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.706 ************************************ 00:05:12.706 START TEST env_pci 00:05:12.706 ************************************ 00:05:12.706 08:42:50 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.706 00:05:12.706 00:05:12.706 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.706 http://cunit.sourceforge.net/ 00:05:12.706 00:05:12.706 00:05:12.706 Suite: pci 00:05:12.706 Test: pci_hook ...[2024-09-28 08:42:50.406054] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57272 has claimed it 00:05:12.706 passed 00:05:12.706 00:05:12.706 EAL: Cannot find device (10000:00:01.0) 00:05:12.706 EAL: Failed to attach device on primary process 00:05:12.706 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.706 suites 1 1 n/a 0 0 00:05:12.706 tests 1 1 1 0 0 00:05:12.706 asserts 25 25 25 0 n/a 00:05:12.706 00:05:12.706 Elapsed time = 0.007 seconds 00:05:12.706 00:05:12.706 real 0m0.075s 00:05:12.706 user 0m0.034s 00:05:12.706 sys 0m0.040s 00:05:12.706 08:42:50 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.706 08:42:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:12.706 ************************************ 00:05:12.706 END TEST env_pci 00:05:12.706 ************************************ 00:05:12.706 08:42:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:12.706 08:42:50 env -- env/env.sh@15 -- # uname 00:05:12.706 08:42:50 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:12.706 08:42:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:12.706 08:42:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.706 08:42:50 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:12.706 08:42:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.706 08:42:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.706 ************************************ 00:05:12.706 START TEST env_dpdk_post_init 00:05:12.706 ************************************ 00:05:12.706 08:42:50 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.706 EAL: Detected CPU lcores: 10 00:05:12.706 EAL: Detected NUMA nodes: 1 00:05:12.706 EAL: Detected shared linkage of DPDK 00:05:12.706 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.706 EAL: Selected IOVA mode 'PA' 00:05:12.965 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.965 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:12.965 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:12.965 Starting DPDK initialization... 00:05:12.965 Starting SPDK post initialization... 00:05:12.965 SPDK NVMe probe 00:05:12.965 Attaching to 0000:00:10.0 00:05:12.965 Attaching to 0000:00:11.0 00:05:12.965 Attached to 0000:00:10.0 00:05:12.965 Attached to 0000:00:11.0 00:05:12.965 Cleaning up... 00:05:12.965 00:05:12.965 real 0m0.272s 00:05:12.965 user 0m0.082s 00:05:12.965 sys 0m0.089s 00:05:12.965 08:42:50 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.965 08:42:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.965 ************************************ 00:05:12.965 END TEST env_dpdk_post_init 00:05:12.965 ************************************ 00:05:12.965 08:42:50 env -- env/env.sh@26 -- # uname 00:05:12.965 08:42:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:12.965 08:42:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.965 08:42:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.965 08:42:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.965 08:42:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.965 ************************************ 00:05:12.965 START TEST env_mem_callbacks 00:05:12.965 ************************************ 00:05:12.965 08:42:50 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.965 EAL: Detected CPU lcores: 10 00:05:12.965 EAL: Detected NUMA nodes: 1 00:05:12.965 EAL: Detected shared linkage of DPDK 00:05:12.965 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.965 EAL: Selected IOVA mode 'PA' 00:05:13.225 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:13.225 00:05:13.225 00:05:13.225 CUnit - A unit testing framework for C - Version 2.1-3 00:05:13.225 http://cunit.sourceforge.net/ 00:05:13.225 00:05:13.225 00:05:13.225 Suite: memory 00:05:13.225 Test: test ... 
00:05:13.225 register 0x200000200000 2097152 00:05:13.225 malloc 3145728 00:05:13.225 register 0x200000400000 4194304 00:05:13.225 buf 0x2000004fffc0 len 3145728 PASSED 00:05:13.225 malloc 64 00:05:13.225 buf 0x2000004ffec0 len 64 PASSED 00:05:13.225 malloc 4194304 00:05:13.225 register 0x200000800000 6291456 00:05:13.225 buf 0x2000009fffc0 len 4194304 PASSED 00:05:13.225 free 0x2000004fffc0 3145728 00:05:13.225 free 0x2000004ffec0 64 00:05:13.225 unregister 0x200000400000 4194304 PASSED 00:05:13.225 free 0x2000009fffc0 4194304 00:05:13.225 unregister 0x200000800000 6291456 PASSED 00:05:13.225 malloc 8388608 00:05:13.225 register 0x200000400000 10485760 00:05:13.225 buf 0x2000005fffc0 len 8388608 PASSED 00:05:13.225 free 0x2000005fffc0 8388608 00:05:13.225 unregister 0x200000400000 10485760 PASSED 00:05:13.225 passed 00:05:13.225 00:05:13.225 Run Summary: Type Total Ran Passed Failed Inactive 00:05:13.225 suites 1 1 n/a 0 0 00:05:13.225 tests 1 1 1 0 0 00:05:13.225 asserts 15 15 15 0 n/a 00:05:13.225 00:05:13.225 Elapsed time = 0.056 seconds 00:05:13.225 00:05:13.225 real 0m0.240s 00:05:13.225 user 0m0.085s 00:05:13.225 sys 0m0.052s 00:05:13.225 08:42:51 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.225 08:42:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:13.225 ************************************ 00:05:13.225 END TEST env_mem_callbacks 00:05:13.225 ************************************ 00:05:13.225 00:05:13.225 real 0m7.382s 00:05:13.225 user 0m5.924s 00:05:13.225 sys 0m1.063s 00:05:13.225 ************************************ 00:05:13.225 END TEST env 00:05:13.225 ************************************ 00:05:13.225 08:42:51 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.225 08:42:51 env -- common/autotest_common.sh@10 -- # set +x 00:05:13.225 08:42:51 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:13.225 08:42:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.225 08:42:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.225 08:42:51 -- common/autotest_common.sh@10 -- # set +x 00:05:13.225 ************************************ 00:05:13.225 START TEST rpc 00:05:13.225 ************************************ 00:05:13.225 08:42:51 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:13.485 * Looking for test storage... 
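The env block above (the vtophys heap grow/shrink loop from 66MB up to 1026MB, the pci_hook claim error, the DPDK post-init probe and the memory register/unregister trace) is all driven by the env.sh wrapper visible in the xtrace prefixes. A minimal sketch for re-running the same binaries by hand, assuming the workspace layout shown in the xtrace lines; the HUGEMEM value and the vtophys binary path are assumptions, and whether sudo is required depends on how the host was provisioned:

    cd /home/vagrant/spdk_repo/spdk
    sudo HUGEMEM=2048 ./scripts/setup.sh        # reserve hugepages so the EAL socket-0 heap can expand/shrink as above
    sudo ./test/env/vtophys/vtophys             # assumed path; drives the heap grow/shrink pattern logged above
    sudo ./test/env/pci/pci_ut                  # expects the spdk_pci_device_claim error printed above
    sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
    sudo ./test/env/mem_callbacks/mem_callbacks # prints the register/unregister trace shown above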
00:05:13.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:13.485 08:42:51 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.485 08:42:51 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.485 08:42:51 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.485 08:42:51 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.485 08:42:51 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.485 08:42:51 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.485 08:42:51 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.485 08:42:51 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.485 08:42:51 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.485 08:42:51 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.485 08:42:51 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.485 08:42:51 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:13.485 08:42:51 rpc -- scripts/common.sh@345 -- # : 1 00:05:13.485 08:42:51 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.485 08:42:51 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.485 08:42:51 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:13.485 08:42:51 rpc -- scripts/common.sh@353 -- # local d=1 00:05:13.485 08:42:51 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.485 08:42:51 rpc -- scripts/common.sh@355 -- # echo 1 00:05:13.485 08:42:51 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.485 08:42:51 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:13.485 08:42:51 rpc -- scripts/common.sh@353 -- # local d=2 00:05:13.485 08:42:51 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.485 08:42:51 rpc -- scripts/common.sh@355 -- # echo 2 00:05:13.485 08:42:51 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.485 08:42:51 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.485 08:42:51 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.485 08:42:51 rpc -- scripts/common.sh@368 -- # return 0 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:13.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.485 --rc genhtml_branch_coverage=1 00:05:13.485 --rc genhtml_function_coverage=1 00:05:13.485 --rc genhtml_legend=1 00:05:13.485 --rc geninfo_all_blocks=1 00:05:13.485 --rc geninfo_unexecuted_blocks=1 00:05:13.485 00:05:13.485 ' 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:13.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.485 --rc genhtml_branch_coverage=1 00:05:13.485 --rc genhtml_function_coverage=1 00:05:13.485 --rc genhtml_legend=1 00:05:13.485 --rc geninfo_all_blocks=1 00:05:13.485 --rc geninfo_unexecuted_blocks=1 00:05:13.485 00:05:13.485 ' 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:13.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.485 --rc genhtml_branch_coverage=1 00:05:13.485 --rc genhtml_function_coverage=1 00:05:13.485 --rc 
genhtml_legend=1 00:05:13.485 --rc geninfo_all_blocks=1 00:05:13.485 --rc geninfo_unexecuted_blocks=1 00:05:13.485 00:05:13.485 ' 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:13.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.485 --rc genhtml_branch_coverage=1 00:05:13.485 --rc genhtml_function_coverage=1 00:05:13.485 --rc genhtml_legend=1 00:05:13.485 --rc geninfo_all_blocks=1 00:05:13.485 --rc geninfo_unexecuted_blocks=1 00:05:13.485 00:05:13.485 ' 00:05:13.485 08:42:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57399 00:05:13.485 08:42:51 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:13.485 08:42:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.485 08:42:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57399 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@831 -- # '[' -z 57399 ']' 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:13.485 08:42:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.745 [2024-09-28 08:42:51.481566] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:13.745 [2024-09-28 08:42:51.481741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57399 ] 00:05:13.745 [2024-09-28 08:42:51.651616] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.004 [2024-09-28 08:42:51.816940] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:14.004 [2024-09-28 08:42:51.817002] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57399' to capture a snapshot of events at runtime. 00:05:14.004 [2024-09-28 08:42:51.817020] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:14.004 [2024-09-28 08:42:51.817035] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:14.004 [2024-09-28 08:42:51.817047] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57399 for offline analysis/debug. 
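At this point spdk_tgt (pid 57399) is up with the bdev tracepoint group enabled, and the rpc suites below drive it through rpc_cmd, which forwards its arguments to scripts/rpc.py over the default /var/tmp/spdk.sock socket. A hedged sketch of replaying the same steps by hand: the spdk_trace invocation is the one quoted in the NOTICE above (its build/bin prefix is assumed), and the RPC method names are the ones that appear in the rpc_integrity xtrace below:

    cd /home/vagrant/spdk_repo/spdk
    ./build/bin/spdk_trace -s spdk_tgt -p 57399        # snapshot the bdev trace events, as the NOTICE suggests
    ./scripts/rpc.py bdev_get_bdevs | jq length        # 0 on a fresh target
    ./scripts/rpc.py bdev_malloc_create 8 512          # returns the new bdev name (Malloc0 here)
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length        # 2: Malloc0 plus the Passthru0 stacked on it
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0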
00:05:14.004 [2024-09-28 08:42:51.817102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.263 [2024-09-28 08:42:52.010096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.523 08:42:52 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:14.523 08:42:52 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:14.523 08:42:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.523 08:42:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.523 08:42:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:14.523 08:42:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:14.523 08:42:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.523 08:42:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.523 08:42:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.523 ************************************ 00:05:14.523 START TEST rpc_integrity 00:05:14.523 ************************************ 00:05:14.523 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:14.523 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.523 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.523 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.523 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.523 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.523 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:14.783 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.783 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.783 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.783 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.783 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.783 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:14.783 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.783 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.783 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.783 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.783 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.783 { 00:05:14.783 "name": "Malloc0", 00:05:14.783 "aliases": [ 00:05:14.783 "b53a5f3c-7bcd-44e9-b6cb-a25424de3e1f" 00:05:14.783 ], 00:05:14.783 "product_name": "Malloc disk", 00:05:14.783 "block_size": 512, 00:05:14.783 "num_blocks": 16384, 00:05:14.783 "uuid": "b53a5f3c-7bcd-44e9-b6cb-a25424de3e1f", 00:05:14.783 "assigned_rate_limits": { 00:05:14.783 "rw_ios_per_sec": 0, 00:05:14.783 "rw_mbytes_per_sec": 0, 00:05:14.783 "r_mbytes_per_sec": 0, 00:05:14.783 "w_mbytes_per_sec": 0 00:05:14.783 }, 00:05:14.783 "claimed": false, 00:05:14.783 "zoned": false, 00:05:14.783 
"supported_io_types": { 00:05:14.783 "read": true, 00:05:14.783 "write": true, 00:05:14.783 "unmap": true, 00:05:14.783 "flush": true, 00:05:14.783 "reset": true, 00:05:14.783 "nvme_admin": false, 00:05:14.783 "nvme_io": false, 00:05:14.783 "nvme_io_md": false, 00:05:14.783 "write_zeroes": true, 00:05:14.783 "zcopy": true, 00:05:14.783 "get_zone_info": false, 00:05:14.783 "zone_management": false, 00:05:14.783 "zone_append": false, 00:05:14.783 "compare": false, 00:05:14.783 "compare_and_write": false, 00:05:14.783 "abort": true, 00:05:14.783 "seek_hole": false, 00:05:14.783 "seek_data": false, 00:05:14.783 "copy": true, 00:05:14.783 "nvme_iov_md": false 00:05:14.783 }, 00:05:14.783 "memory_domains": [ 00:05:14.783 { 00:05:14.783 "dma_device_id": "system", 00:05:14.783 "dma_device_type": 1 00:05:14.783 }, 00:05:14.783 { 00:05:14.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.783 "dma_device_type": 2 00:05:14.783 } 00:05:14.783 ], 00:05:14.783 "driver_specific": {} 00:05:14.783 } 00:05:14.783 ]' 00:05:14.783 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:14.783 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.783 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:14.783 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.783 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.783 [2024-09-28 08:42:52.647295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:14.783 [2024-09-28 08:42:52.647388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.783 [2024-09-28 08:42:52.647437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:05:14.783 [2024-09-28 08:42:52.647457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.783 [2024-09-28 08:42:52.650197] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.783 [2024-09-28 08:42:52.650268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.783 Passthru0 00:05:14.783 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.783 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.783 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.783 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.783 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.783 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.783 { 00:05:14.783 "name": "Malloc0", 00:05:14.783 "aliases": [ 00:05:14.783 "b53a5f3c-7bcd-44e9-b6cb-a25424de3e1f" 00:05:14.783 ], 00:05:14.783 "product_name": "Malloc disk", 00:05:14.783 "block_size": 512, 00:05:14.783 "num_blocks": 16384, 00:05:14.783 "uuid": "b53a5f3c-7bcd-44e9-b6cb-a25424de3e1f", 00:05:14.783 "assigned_rate_limits": { 00:05:14.783 "rw_ios_per_sec": 0, 00:05:14.783 "rw_mbytes_per_sec": 0, 00:05:14.783 "r_mbytes_per_sec": 0, 00:05:14.783 "w_mbytes_per_sec": 0 00:05:14.783 }, 00:05:14.783 "claimed": true, 00:05:14.783 "claim_type": "exclusive_write", 00:05:14.783 "zoned": false, 00:05:14.783 "supported_io_types": { 00:05:14.783 "read": true, 00:05:14.783 "write": true, 00:05:14.783 "unmap": true, 00:05:14.783 "flush": true, 00:05:14.783 "reset": true, 00:05:14.783 "nvme_admin": false, 
00:05:14.783 "nvme_io": false, 00:05:14.783 "nvme_io_md": false, 00:05:14.783 "write_zeroes": true, 00:05:14.783 "zcopy": true, 00:05:14.783 "get_zone_info": false, 00:05:14.783 "zone_management": false, 00:05:14.783 "zone_append": false, 00:05:14.783 "compare": false, 00:05:14.783 "compare_and_write": false, 00:05:14.784 "abort": true, 00:05:14.784 "seek_hole": false, 00:05:14.784 "seek_data": false, 00:05:14.784 "copy": true, 00:05:14.784 "nvme_iov_md": false 00:05:14.784 }, 00:05:14.784 "memory_domains": [ 00:05:14.784 { 00:05:14.784 "dma_device_id": "system", 00:05:14.784 "dma_device_type": 1 00:05:14.784 }, 00:05:14.784 { 00:05:14.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.784 "dma_device_type": 2 00:05:14.784 } 00:05:14.784 ], 00:05:14.784 "driver_specific": {} 00:05:14.784 }, 00:05:14.784 { 00:05:14.784 "name": "Passthru0", 00:05:14.784 "aliases": [ 00:05:14.784 "760cf73b-a082-5f33-95e6-e318c03ef7b5" 00:05:14.784 ], 00:05:14.784 "product_name": "passthru", 00:05:14.784 "block_size": 512, 00:05:14.784 "num_blocks": 16384, 00:05:14.784 "uuid": "760cf73b-a082-5f33-95e6-e318c03ef7b5", 00:05:14.784 "assigned_rate_limits": { 00:05:14.784 "rw_ios_per_sec": 0, 00:05:14.784 "rw_mbytes_per_sec": 0, 00:05:14.784 "r_mbytes_per_sec": 0, 00:05:14.784 "w_mbytes_per_sec": 0 00:05:14.784 }, 00:05:14.784 "claimed": false, 00:05:14.784 "zoned": false, 00:05:14.784 "supported_io_types": { 00:05:14.784 "read": true, 00:05:14.784 "write": true, 00:05:14.784 "unmap": true, 00:05:14.784 "flush": true, 00:05:14.784 "reset": true, 00:05:14.784 "nvme_admin": false, 00:05:14.784 "nvme_io": false, 00:05:14.784 "nvme_io_md": false, 00:05:14.784 "write_zeroes": true, 00:05:14.784 "zcopy": true, 00:05:14.784 "get_zone_info": false, 00:05:14.784 "zone_management": false, 00:05:14.784 "zone_append": false, 00:05:14.784 "compare": false, 00:05:14.784 "compare_and_write": false, 00:05:14.784 "abort": true, 00:05:14.784 "seek_hole": false, 00:05:14.784 "seek_data": false, 00:05:14.784 "copy": true, 00:05:14.784 "nvme_iov_md": false 00:05:14.784 }, 00:05:14.784 "memory_domains": [ 00:05:14.784 { 00:05:14.784 "dma_device_id": "system", 00:05:14.784 "dma_device_type": 1 00:05:14.784 }, 00:05:14.784 { 00:05:14.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.784 "dma_device_type": 2 00:05:14.784 } 00:05:14.784 ], 00:05:14.784 "driver_specific": { 00:05:14.784 "passthru": { 00:05:14.784 "name": "Passthru0", 00:05:14.784 "base_bdev_name": "Malloc0" 00:05:14.784 } 00:05:14.784 } 00:05:14.784 } 00:05:14.784 ]' 00:05:14.784 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:14.784 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.784 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.784 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.784 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.784 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.784 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:14.784 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.784 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.784 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.784 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.784 08:42:52 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.784 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.784 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.784 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.042 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:15.042 08:42:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:15.042 00:05:15.042 real 0m0.347s 00:05:15.042 user 0m0.216s 00:05:15.042 sys 0m0.045s 00:05:15.042 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.042 08:42:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.042 ************************************ 00:05:15.042 END TEST rpc_integrity 00:05:15.042 ************************************ 00:05:15.042 08:42:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:15.042 08:42:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.042 08:42:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.042 08:42:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.042 ************************************ 00:05:15.042 START TEST rpc_plugins 00:05:15.042 ************************************ 00:05:15.042 08:42:52 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:15.042 08:42:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:15.042 08:42:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.042 08:42:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.042 08:42:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.042 08:42:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:15.042 08:42:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:15.042 08:42:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.042 08:42:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.042 08:42:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.042 08:42:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:15.042 { 00:05:15.042 "name": "Malloc1", 00:05:15.042 "aliases": [ 00:05:15.042 "94f1c24d-00a1-41ee-a3bb-fbac001063aa" 00:05:15.042 ], 00:05:15.042 "product_name": "Malloc disk", 00:05:15.042 "block_size": 4096, 00:05:15.042 "num_blocks": 256, 00:05:15.042 "uuid": "94f1c24d-00a1-41ee-a3bb-fbac001063aa", 00:05:15.042 "assigned_rate_limits": { 00:05:15.042 "rw_ios_per_sec": 0, 00:05:15.042 "rw_mbytes_per_sec": 0, 00:05:15.042 "r_mbytes_per_sec": 0, 00:05:15.042 "w_mbytes_per_sec": 0 00:05:15.042 }, 00:05:15.042 "claimed": false, 00:05:15.043 "zoned": false, 00:05:15.043 "supported_io_types": { 00:05:15.043 "read": true, 00:05:15.043 "write": true, 00:05:15.043 "unmap": true, 00:05:15.043 "flush": true, 00:05:15.043 "reset": true, 00:05:15.043 "nvme_admin": false, 00:05:15.043 "nvme_io": false, 00:05:15.043 "nvme_io_md": false, 00:05:15.043 "write_zeroes": true, 00:05:15.043 "zcopy": true, 00:05:15.043 "get_zone_info": false, 00:05:15.043 "zone_management": false, 00:05:15.043 "zone_append": false, 00:05:15.043 "compare": false, 00:05:15.043 "compare_and_write": false, 00:05:15.043 "abort": true, 00:05:15.043 "seek_hole": false, 00:05:15.043 "seek_data": false, 00:05:15.043 "copy": true, 00:05:15.043 "nvme_iov_md": false 00:05:15.043 }, 00:05:15.043 "memory_domains": [ 00:05:15.043 { 
00:05:15.043 "dma_device_id": "system", 00:05:15.043 "dma_device_type": 1 00:05:15.043 }, 00:05:15.043 { 00:05:15.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.043 "dma_device_type": 2 00:05:15.043 } 00:05:15.043 ], 00:05:15.043 "driver_specific": {} 00:05:15.043 } 00:05:15.043 ]' 00:05:15.043 08:42:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:15.043 08:42:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:15.043 08:42:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:15.043 08:42:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.043 08:42:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.043 08:42:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.043 08:42:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:15.043 08:42:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.043 08:42:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.043 08:42:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.043 08:42:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:15.043 08:42:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:15.302 08:42:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:15.302 00:05:15.302 real 0m0.164s 00:05:15.302 user 0m0.100s 00:05:15.302 sys 0m0.023s 00:05:15.302 08:42:53 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.302 ************************************ 00:05:15.302 END TEST rpc_plugins 00:05:15.302 ************************************ 00:05:15.302 08:42:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.302 08:42:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:15.302 08:42:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.302 08:42:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.302 08:42:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.302 ************************************ 00:05:15.302 START TEST rpc_trace_cmd_test 00:05:15.302 ************************************ 00:05:15.302 08:42:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:15.302 08:42:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:15.302 08:42:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:15.302 08:42:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.302 08:42:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.302 08:42:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.302 08:42:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:15.302 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57399", 00:05:15.302 "tpoint_group_mask": "0x8", 00:05:15.302 "iscsi_conn": { 00:05:15.302 "mask": "0x2", 00:05:15.302 "tpoint_mask": "0x0" 00:05:15.302 }, 00:05:15.302 "scsi": { 00:05:15.302 "mask": "0x4", 00:05:15.302 "tpoint_mask": "0x0" 00:05:15.302 }, 00:05:15.302 "bdev": { 00:05:15.302 "mask": "0x8", 00:05:15.302 "tpoint_mask": "0xffffffffffffffff" 00:05:15.302 }, 00:05:15.302 "nvmf_rdma": { 00:05:15.303 "mask": "0x10", 00:05:15.303 "tpoint_mask": "0x0" 00:05:15.303 }, 00:05:15.303 "nvmf_tcp": { 00:05:15.303 "mask": "0x20", 00:05:15.303 "tpoint_mask": "0x0" 00:05:15.303 }, 00:05:15.303 "ftl": { 00:05:15.303 
"mask": "0x40", 00:05:15.303 "tpoint_mask": "0x0" 00:05:15.303 }, 00:05:15.303 "blobfs": { 00:05:15.303 "mask": "0x80", 00:05:15.303 "tpoint_mask": "0x0" 00:05:15.303 }, 00:05:15.303 "dsa": { 00:05:15.303 "mask": "0x200", 00:05:15.303 "tpoint_mask": "0x0" 00:05:15.303 }, 00:05:15.303 "thread": { 00:05:15.303 "mask": "0x400", 00:05:15.303 "tpoint_mask": "0x0" 00:05:15.303 }, 00:05:15.303 "nvme_pcie": { 00:05:15.303 "mask": "0x800", 00:05:15.303 "tpoint_mask": "0x0" 00:05:15.303 }, 00:05:15.303 "iaa": { 00:05:15.303 "mask": "0x1000", 00:05:15.303 "tpoint_mask": "0x0" 00:05:15.303 }, 00:05:15.303 "nvme_tcp": { 00:05:15.303 "mask": "0x2000", 00:05:15.303 "tpoint_mask": "0x0" 00:05:15.303 }, 00:05:15.303 "bdev_nvme": { 00:05:15.303 "mask": "0x4000", 00:05:15.303 "tpoint_mask": "0x0" 00:05:15.303 }, 00:05:15.303 "sock": { 00:05:15.303 "mask": "0x8000", 00:05:15.303 "tpoint_mask": "0x0" 00:05:15.303 }, 00:05:15.303 "blob": { 00:05:15.303 "mask": "0x10000", 00:05:15.303 "tpoint_mask": "0x0" 00:05:15.303 }, 00:05:15.303 "bdev_raid": { 00:05:15.303 "mask": "0x20000", 00:05:15.303 "tpoint_mask": "0x0" 00:05:15.303 } 00:05:15.303 }' 00:05:15.303 08:42:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:15.303 08:42:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:15.303 08:42:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:15.303 08:42:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:15.303 08:42:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:15.303 08:42:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:15.303 08:42:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:15.562 08:42:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:15.562 08:42:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:15.562 08:42:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:15.562 00:05:15.562 real 0m0.275s 00:05:15.562 user 0m0.240s 00:05:15.562 sys 0m0.022s 00:05:15.562 08:42:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.562 ************************************ 00:05:15.562 END TEST rpc_trace_cmd_test 00:05:15.562 08:42:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.562 ************************************ 00:05:15.562 08:42:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:15.562 08:42:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:15.562 08:42:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:15.562 08:42:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.562 08:42:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.562 08:42:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.562 ************************************ 00:05:15.562 START TEST rpc_daemon_integrity 00:05:15.562 ************************************ 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.562 08:42:53 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.562 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.563 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:15.563 { 00:05:15.563 "name": "Malloc2", 00:05:15.563 "aliases": [ 00:05:15.563 "495022b1-9c44-4901-876a-4ec98690d672" 00:05:15.563 ], 00:05:15.563 "product_name": "Malloc disk", 00:05:15.563 "block_size": 512, 00:05:15.563 "num_blocks": 16384, 00:05:15.563 "uuid": "495022b1-9c44-4901-876a-4ec98690d672", 00:05:15.563 "assigned_rate_limits": { 00:05:15.563 "rw_ios_per_sec": 0, 00:05:15.563 "rw_mbytes_per_sec": 0, 00:05:15.563 "r_mbytes_per_sec": 0, 00:05:15.563 "w_mbytes_per_sec": 0 00:05:15.563 }, 00:05:15.563 "claimed": false, 00:05:15.563 "zoned": false, 00:05:15.563 "supported_io_types": { 00:05:15.563 "read": true, 00:05:15.563 "write": true, 00:05:15.563 "unmap": true, 00:05:15.563 "flush": true, 00:05:15.563 "reset": true, 00:05:15.563 "nvme_admin": false, 00:05:15.563 "nvme_io": false, 00:05:15.563 "nvme_io_md": false, 00:05:15.563 "write_zeroes": true, 00:05:15.563 "zcopy": true, 00:05:15.563 "get_zone_info": false, 00:05:15.563 "zone_management": false, 00:05:15.563 "zone_append": false, 00:05:15.563 "compare": false, 00:05:15.563 "compare_and_write": false, 00:05:15.563 "abort": true, 00:05:15.563 "seek_hole": false, 00:05:15.563 "seek_data": false, 00:05:15.563 "copy": true, 00:05:15.563 "nvme_iov_md": false 00:05:15.563 }, 00:05:15.563 "memory_domains": [ 00:05:15.563 { 00:05:15.563 "dma_device_id": "system", 00:05:15.563 "dma_device_type": 1 00:05:15.563 }, 00:05:15.563 { 00:05:15.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.563 "dma_device_type": 2 00:05:15.563 } 00:05:15.563 ], 00:05:15.563 "driver_specific": {} 00:05:15.563 } 00:05:15.563 ]' 00:05:15.563 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:15.823 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:15.823 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:15.823 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.823 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.823 [2024-09-28 08:42:53.580652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:15.823 [2024-09-28 08:42:53.580738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.823 [2024-09-28 08:42:53.580771] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008d80 00:05:15.823 [2024-09-28 08:42:53.580785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.824 [2024-09-28 08:42:53.583507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.824 [2024-09-28 08:42:53.583563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:15.824 Passthru0 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.824 { 00:05:15.824 "name": "Malloc2", 00:05:15.824 "aliases": [ 00:05:15.824 "495022b1-9c44-4901-876a-4ec98690d672" 00:05:15.824 ], 00:05:15.824 "product_name": "Malloc disk", 00:05:15.824 "block_size": 512, 00:05:15.824 "num_blocks": 16384, 00:05:15.824 "uuid": "495022b1-9c44-4901-876a-4ec98690d672", 00:05:15.824 "assigned_rate_limits": { 00:05:15.824 "rw_ios_per_sec": 0, 00:05:15.824 "rw_mbytes_per_sec": 0, 00:05:15.824 "r_mbytes_per_sec": 0, 00:05:15.824 "w_mbytes_per_sec": 0 00:05:15.824 }, 00:05:15.824 "claimed": true, 00:05:15.824 "claim_type": "exclusive_write", 00:05:15.824 "zoned": false, 00:05:15.824 "supported_io_types": { 00:05:15.824 "read": true, 00:05:15.824 "write": true, 00:05:15.824 "unmap": true, 00:05:15.824 "flush": true, 00:05:15.824 "reset": true, 00:05:15.824 "nvme_admin": false, 00:05:15.824 "nvme_io": false, 00:05:15.824 "nvme_io_md": false, 00:05:15.824 "write_zeroes": true, 00:05:15.824 "zcopy": true, 00:05:15.824 "get_zone_info": false, 00:05:15.824 "zone_management": false, 00:05:15.824 "zone_append": false, 00:05:15.824 "compare": false, 00:05:15.824 "compare_and_write": false, 00:05:15.824 "abort": true, 00:05:15.824 "seek_hole": false, 00:05:15.824 "seek_data": false, 00:05:15.824 "copy": true, 00:05:15.824 "nvme_iov_md": false 00:05:15.824 }, 00:05:15.824 "memory_domains": [ 00:05:15.824 { 00:05:15.824 "dma_device_id": "system", 00:05:15.824 "dma_device_type": 1 00:05:15.824 }, 00:05:15.824 { 00:05:15.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.824 "dma_device_type": 2 00:05:15.824 } 00:05:15.824 ], 00:05:15.824 "driver_specific": {} 00:05:15.824 }, 00:05:15.824 { 00:05:15.824 "name": "Passthru0", 00:05:15.824 "aliases": [ 00:05:15.824 "99d3c130-f6a1-5e1f-9a40-1609e38c09c3" 00:05:15.824 ], 00:05:15.824 "product_name": "passthru", 00:05:15.824 "block_size": 512, 00:05:15.824 "num_blocks": 16384, 00:05:15.824 "uuid": "99d3c130-f6a1-5e1f-9a40-1609e38c09c3", 00:05:15.824 "assigned_rate_limits": { 00:05:15.824 "rw_ios_per_sec": 0, 00:05:15.824 "rw_mbytes_per_sec": 0, 00:05:15.824 "r_mbytes_per_sec": 0, 00:05:15.824 "w_mbytes_per_sec": 0 00:05:15.824 }, 00:05:15.824 "claimed": false, 00:05:15.824 "zoned": false, 00:05:15.824 "supported_io_types": { 00:05:15.824 "read": true, 00:05:15.824 "write": true, 00:05:15.824 "unmap": true, 00:05:15.824 "flush": true, 00:05:15.824 "reset": true, 00:05:15.824 "nvme_admin": false, 00:05:15.824 "nvme_io": false, 00:05:15.824 "nvme_io_md": false, 00:05:15.824 "write_zeroes": true, 00:05:15.824 "zcopy": true, 00:05:15.824 "get_zone_info": 
false, 00:05:15.824 "zone_management": false, 00:05:15.824 "zone_append": false, 00:05:15.824 "compare": false, 00:05:15.824 "compare_and_write": false, 00:05:15.824 "abort": true, 00:05:15.824 "seek_hole": false, 00:05:15.824 "seek_data": false, 00:05:15.824 "copy": true, 00:05:15.824 "nvme_iov_md": false 00:05:15.824 }, 00:05:15.824 "memory_domains": [ 00:05:15.824 { 00:05:15.824 "dma_device_id": "system", 00:05:15.824 "dma_device_type": 1 00:05:15.824 }, 00:05:15.824 { 00:05:15.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.824 "dma_device_type": 2 00:05:15.824 } 00:05:15.824 ], 00:05:15.824 "driver_specific": { 00:05:15.824 "passthru": { 00:05:15.824 "name": "Passthru0", 00:05:15.824 "base_bdev_name": "Malloc2" 00:05:15.824 } 00:05:15.824 } 00:05:15.824 } 00:05:15.824 ]' 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:15.824 00:05:15.824 real 0m0.338s 00:05:15.824 user 0m0.213s 00:05:15.824 sys 0m0.042s 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.824 ************************************ 00:05:15.824 END TEST rpc_daemon_integrity 00:05:15.824 ************************************ 00:05:15.824 08:42:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.824 08:42:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:15.824 08:42:53 rpc -- rpc/rpc.sh@84 -- # killprocess 57399 00:05:15.824 08:42:53 rpc -- common/autotest_common.sh@950 -- # '[' -z 57399 ']' 00:05:15.824 08:42:53 rpc -- common/autotest_common.sh@954 -- # kill -0 57399 00:05:15.824 08:42:53 rpc -- common/autotest_common.sh@955 -- # uname 00:05:16.102 08:42:53 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:16.102 08:42:53 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57399 00:05:16.102 08:42:53 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:16.102 08:42:53 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:16.102 08:42:53 rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 57399' 00:05:16.102 killing process with pid 57399 00:05:16.102 08:42:53 rpc -- common/autotest_common.sh@969 -- # kill 57399 00:05:16.102 08:42:53 rpc -- common/autotest_common.sh@974 -- # wait 57399 00:05:18.019 00:05:18.019 real 0m4.561s 00:05:18.019 user 0m5.286s 00:05:18.019 sys 0m0.764s 00:05:18.019 08:42:55 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.019 08:42:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.019 ************************************ 00:05:18.019 END TEST rpc 00:05:18.019 ************************************ 00:05:18.019 08:42:55 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:18.019 08:42:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.019 08:42:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.019 08:42:55 -- common/autotest_common.sh@10 -- # set +x 00:05:18.019 ************************************ 00:05:18.019 START TEST skip_rpc 00:05:18.019 ************************************ 00:05:18.019 08:42:55 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:18.019 * Looking for test storage... 00:05:18.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:18.019 08:42:55 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:18.019 08:42:55 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:18.019 08:42:55 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:18.019 08:42:55 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:18.019 08:42:55 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.019 08:42:55 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.019 08:42:55 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.019 08:42:55 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.019 08:42:55 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.019 08:42:55 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.019 08:42:55 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.019 08:42:55 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.019 08:42:55 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.020 08:42:55 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:18.020 08:42:55 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.020 08:42:55 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:18.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.020 --rc genhtml_branch_coverage=1 00:05:18.020 --rc genhtml_function_coverage=1 00:05:18.020 --rc genhtml_legend=1 00:05:18.020 --rc geninfo_all_blocks=1 00:05:18.020 --rc geninfo_unexecuted_blocks=1 00:05:18.020 00:05:18.020 ' 00:05:18.020 08:42:55 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:18.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.020 --rc genhtml_branch_coverage=1 00:05:18.020 --rc genhtml_function_coverage=1 00:05:18.020 --rc genhtml_legend=1 00:05:18.020 --rc geninfo_all_blocks=1 00:05:18.020 --rc geninfo_unexecuted_blocks=1 00:05:18.020 00:05:18.020 ' 00:05:18.020 08:42:55 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:18.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.020 --rc genhtml_branch_coverage=1 00:05:18.020 --rc genhtml_function_coverage=1 00:05:18.020 --rc genhtml_legend=1 00:05:18.020 --rc geninfo_all_blocks=1 00:05:18.020 --rc geninfo_unexecuted_blocks=1 00:05:18.020 00:05:18.020 ' 00:05:18.020 08:42:55 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:18.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.020 --rc genhtml_branch_coverage=1 00:05:18.020 --rc genhtml_function_coverage=1 00:05:18.020 --rc genhtml_legend=1 00:05:18.020 --rc geninfo_all_blocks=1 00:05:18.020 --rc geninfo_unexecuted_blocks=1 00:05:18.020 00:05:18.020 ' 00:05:18.020 08:42:55 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:18.020 08:42:55 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:18.020 08:42:55 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:18.020 08:42:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.020 08:42:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.020 08:42:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.020 ************************************ 00:05:18.020 START TEST skip_rpc 00:05:18.020 ************************************ 00:05:18.020 08:42:55 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:18.020 08:42:55 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57623 00:05:18.020 08:42:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.020 08:42:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:18.020 08:42:55 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:18.279 [2024-09-28 08:42:56.094722] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:18.279 [2024-09-28 08:42:56.094927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57623 ] 00:05:18.279 [2024-09-28 08:42:56.261519] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.539 [2024-09-28 08:42:56.419904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.798 [2024-09-28 08:42:56.605034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57623 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57623 ']' 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57623 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.991 08:43:00 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57623 00:05:23.251 08:43:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.251 08:43:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.251 killing process with pid 57623 00:05:23.251 08:43:01 skip_rpc.skip_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 57623' 00:05:23.251 08:43:01 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57623 00:05:23.251 08:43:01 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57623 00:05:25.156 00:05:25.156 real 0m6.881s 00:05:25.156 user 0m6.467s 00:05:25.156 sys 0m0.322s 00:05:25.157 ************************************ 00:05:25.157 END TEST skip_rpc 00:05:25.157 ************************************ 00:05:25.157 08:43:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.157 08:43:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.157 08:43:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:25.157 08:43:02 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.157 08:43:02 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.157 08:43:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.157 ************************************ 00:05:25.157 START TEST skip_rpc_with_json 00:05:25.157 ************************************ 00:05:25.157 08:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:25.157 08:43:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:25.157 08:43:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57721 00:05:25.157 08:43:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.157 08:43:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.157 08:43:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57721 00:05:25.157 08:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57721 ']' 00:05:25.157 08:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.157 08:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.157 08:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.157 08:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.157 08:43:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.157 [2024-09-28 08:43:03.033129] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
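The suite that just ended (skip_rpc) and the one starting here (skip_rpc_with_json) exercise the two RPC-server modes of spdk_tgt: with --no-rpc-server nothing listens on the socket, so the spdk_get_version call above had to fail, while the JSON variant builds state over RPC and dumps it with save_config into test/rpc/config.json. A rough by-hand sketch using only the flags and method names visible in this log; the sleeps stand in for the waitforlisten helper and are an assumption:

    cd /home/vagrant/spdk_repo/spdk
    # skip_rpc: no RPC listener, so the version query is expected to fail
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 & pid=$!; sleep 5
    ./scripts/rpc.py spdk_get_version && echo "FAIL: rpc server unexpectedly answered"
    kill $pid

    # skip_rpc_with_json: create the tcp transport over RPC, then save the JSON config dumped below
    ./build/bin/spdk_tgt -m 0x1 & pid=$!; sleep 5
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > test/rpc/config.json
    kill $pid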
00:05:25.157 [2024-09-28 08:43:03.034066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57721 ] 00:05:25.416 [2024-09-28 08:43:03.209891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.416 [2024-09-28 08:43:03.365933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.675 [2024-09-28 08:43:03.559064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.245 [2024-09-28 08:43:04.016185] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:26.245 request: 00:05:26.245 { 00:05:26.245 "trtype": "tcp", 00:05:26.245 "method": "nvmf_get_transports", 00:05:26.245 "req_id": 1 00:05:26.245 } 00:05:26.245 Got JSON-RPC error response 00:05:26.245 response: 00:05:26.245 { 00:05:26.245 "code": -19, 00:05:26.245 "message": "No such device" 00:05:26.245 } 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.245 [2024-09-28 08:43:04.028331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.245 08:43:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:26.245 { 00:05:26.245 "subsystems": [ 00:05:26.245 { 00:05:26.245 "subsystem": "fsdev", 00:05:26.245 "config": [ 00:05:26.245 { 00:05:26.245 "method": "fsdev_set_opts", 00:05:26.245 "params": { 00:05:26.245 "fsdev_io_pool_size": 65535, 00:05:26.245 "fsdev_io_cache_size": 256 00:05:26.245 } 00:05:26.245 } 00:05:26.245 ] 00:05:26.245 }, 00:05:26.245 { 00:05:26.245 "subsystem": "vfio_user_target", 00:05:26.245 "config": null 00:05:26.245 }, 00:05:26.245 { 00:05:26.245 "subsystem": "keyring", 00:05:26.245 "config": [] 00:05:26.245 }, 00:05:26.245 { 00:05:26.245 "subsystem": "iobuf", 00:05:26.245 "config": [ 00:05:26.245 { 00:05:26.245 "method": "iobuf_set_options", 00:05:26.245 "params": { 00:05:26.245 "small_pool_count": 8192, 00:05:26.245 "large_pool_count": 1024, 00:05:26.245 
"small_bufsize": 8192, 00:05:26.245 "large_bufsize": 135168 00:05:26.245 } 00:05:26.245 } 00:05:26.245 ] 00:05:26.245 }, 00:05:26.245 { 00:05:26.245 "subsystem": "sock", 00:05:26.245 "config": [ 00:05:26.245 { 00:05:26.245 "method": "sock_set_default_impl", 00:05:26.245 "params": { 00:05:26.245 "impl_name": "uring" 00:05:26.245 } 00:05:26.245 }, 00:05:26.245 { 00:05:26.245 "method": "sock_impl_set_options", 00:05:26.245 "params": { 00:05:26.245 "impl_name": "ssl", 00:05:26.245 "recv_buf_size": 4096, 00:05:26.245 "send_buf_size": 4096, 00:05:26.245 "enable_recv_pipe": true, 00:05:26.245 "enable_quickack": false, 00:05:26.245 "enable_placement_id": 0, 00:05:26.245 "enable_zerocopy_send_server": true, 00:05:26.245 "enable_zerocopy_send_client": false, 00:05:26.245 "zerocopy_threshold": 0, 00:05:26.245 "tls_version": 0, 00:05:26.245 "enable_ktls": false 00:05:26.245 } 00:05:26.245 }, 00:05:26.245 { 00:05:26.245 "method": "sock_impl_set_options", 00:05:26.245 "params": { 00:05:26.245 "impl_name": "posix", 00:05:26.245 "recv_buf_size": 2097152, 00:05:26.245 "send_buf_size": 2097152, 00:05:26.245 "enable_recv_pipe": true, 00:05:26.245 "enable_quickack": false, 00:05:26.245 "enable_placement_id": 0, 00:05:26.245 "enable_zerocopy_send_server": true, 00:05:26.245 "enable_zerocopy_send_client": false, 00:05:26.245 "zerocopy_threshold": 0, 00:05:26.245 "tls_version": 0, 00:05:26.245 "enable_ktls": false 00:05:26.245 } 00:05:26.245 }, 00:05:26.245 { 00:05:26.245 "method": "sock_impl_set_options", 00:05:26.245 "params": { 00:05:26.245 "impl_name": "uring", 00:05:26.245 "recv_buf_size": 2097152, 00:05:26.245 "send_buf_size": 2097152, 00:05:26.245 "enable_recv_pipe": true, 00:05:26.245 "enable_quickack": false, 00:05:26.245 "enable_placement_id": 0, 00:05:26.245 "enable_zerocopy_send_server": false, 00:05:26.245 "enable_zerocopy_send_client": false, 00:05:26.245 "zerocopy_threshold": 0, 00:05:26.245 "tls_version": 0, 00:05:26.245 "enable_ktls": false 00:05:26.245 } 00:05:26.245 } 00:05:26.245 ] 00:05:26.245 }, 00:05:26.245 { 00:05:26.245 "subsystem": "vmd", 00:05:26.245 "config": [] 00:05:26.245 }, 00:05:26.245 { 00:05:26.245 "subsystem": "accel", 00:05:26.245 "config": [ 00:05:26.245 { 00:05:26.245 "method": "accel_set_options", 00:05:26.245 "params": { 00:05:26.245 "small_cache_size": 128, 00:05:26.245 "large_cache_size": 16, 00:05:26.245 "task_count": 2048, 00:05:26.245 "sequence_count": 2048, 00:05:26.245 "buf_count": 2048 00:05:26.245 } 00:05:26.245 } 00:05:26.245 ] 00:05:26.245 }, 00:05:26.245 { 00:05:26.245 "subsystem": "bdev", 00:05:26.245 "config": [ 00:05:26.245 { 00:05:26.245 "method": "bdev_set_options", 00:05:26.245 "params": { 00:05:26.245 "bdev_io_pool_size": 65535, 00:05:26.245 "bdev_io_cache_size": 256, 00:05:26.245 "bdev_auto_examine": true, 00:05:26.245 "iobuf_small_cache_size": 128, 00:05:26.245 "iobuf_large_cache_size": 16 00:05:26.245 } 00:05:26.245 }, 00:05:26.245 { 00:05:26.245 "method": "bdev_raid_set_options", 00:05:26.245 "params": { 00:05:26.245 "process_window_size_kb": 1024, 00:05:26.245 "process_max_bandwidth_mb_sec": 0 00:05:26.245 } 00:05:26.245 }, 00:05:26.245 { 00:05:26.245 "method": "bdev_iscsi_set_options", 00:05:26.245 "params": { 00:05:26.245 "timeout_sec": 30 00:05:26.245 } 00:05:26.245 }, 00:05:26.245 { 00:05:26.245 "method": "bdev_nvme_set_options", 00:05:26.245 "params": { 00:05:26.246 "action_on_timeout": "none", 00:05:26.246 "timeout_us": 0, 00:05:26.246 "timeout_admin_us": 0, 00:05:26.246 "keep_alive_timeout_ms": 10000, 00:05:26.246 "arbitration_burst": 0, 
00:05:26.246 "low_priority_weight": 0, 00:05:26.246 "medium_priority_weight": 0, 00:05:26.246 "high_priority_weight": 0, 00:05:26.246 "nvme_adminq_poll_period_us": 10000, 00:05:26.246 "nvme_ioq_poll_period_us": 0, 00:05:26.246 "io_queue_requests": 0, 00:05:26.246 "delay_cmd_submit": true, 00:05:26.246 "transport_retry_count": 4, 00:05:26.246 "bdev_retry_count": 3, 00:05:26.246 "transport_ack_timeout": 0, 00:05:26.246 "ctrlr_loss_timeout_sec": 0, 00:05:26.246 "reconnect_delay_sec": 0, 00:05:26.246 "fast_io_fail_timeout_sec": 0, 00:05:26.246 "disable_auto_failback": false, 00:05:26.246 "generate_uuids": false, 00:05:26.246 "transport_tos": 0, 00:05:26.246 "nvme_error_stat": false, 00:05:26.246 "rdma_srq_size": 0, 00:05:26.246 "io_path_stat": false, 00:05:26.246 "allow_accel_sequence": false, 00:05:26.246 "rdma_max_cq_size": 0, 00:05:26.246 "rdma_cm_event_timeout_ms": 0, 00:05:26.246 "dhchap_digests": [ 00:05:26.246 "sha256", 00:05:26.246 "sha384", 00:05:26.246 "sha512" 00:05:26.246 ], 00:05:26.246 "dhchap_dhgroups": [ 00:05:26.246 "null", 00:05:26.246 "ffdhe2048", 00:05:26.246 "ffdhe3072", 00:05:26.246 "ffdhe4096", 00:05:26.246 "ffdhe6144", 00:05:26.246 "ffdhe8192" 00:05:26.246 ] 00:05:26.246 } 00:05:26.246 }, 00:05:26.246 { 00:05:26.246 "method": "bdev_nvme_set_hotplug", 00:05:26.246 "params": { 00:05:26.246 "period_us": 100000, 00:05:26.246 "enable": false 00:05:26.246 } 00:05:26.246 }, 00:05:26.246 { 00:05:26.246 "method": "bdev_wait_for_examine" 00:05:26.246 } 00:05:26.246 ] 00:05:26.246 }, 00:05:26.246 { 00:05:26.246 "subsystem": "scsi", 00:05:26.246 "config": null 00:05:26.246 }, 00:05:26.246 { 00:05:26.246 "subsystem": "scheduler", 00:05:26.246 "config": [ 00:05:26.246 { 00:05:26.246 "method": "framework_set_scheduler", 00:05:26.246 "params": { 00:05:26.246 "name": "static" 00:05:26.246 } 00:05:26.246 } 00:05:26.246 ] 00:05:26.246 }, 00:05:26.246 { 00:05:26.246 "subsystem": "vhost_scsi", 00:05:26.246 "config": [] 00:05:26.246 }, 00:05:26.246 { 00:05:26.246 "subsystem": "vhost_blk", 00:05:26.246 "config": [] 00:05:26.246 }, 00:05:26.246 { 00:05:26.246 "subsystem": "ublk", 00:05:26.246 "config": [] 00:05:26.246 }, 00:05:26.246 { 00:05:26.246 "subsystem": "nbd", 00:05:26.246 "config": [] 00:05:26.246 }, 00:05:26.246 { 00:05:26.246 "subsystem": "nvmf", 00:05:26.246 "config": [ 00:05:26.246 { 00:05:26.246 "method": "nvmf_set_config", 00:05:26.246 "params": { 00:05:26.246 "discovery_filter": "match_any", 00:05:26.246 "admin_cmd_passthru": { 00:05:26.246 "identify_ctrlr": false 00:05:26.246 }, 00:05:26.246 "dhchap_digests": [ 00:05:26.246 "sha256", 00:05:26.246 "sha384", 00:05:26.246 "sha512" 00:05:26.246 ], 00:05:26.246 "dhchap_dhgroups": [ 00:05:26.246 "null", 00:05:26.246 "ffdhe2048", 00:05:26.246 "ffdhe3072", 00:05:26.246 "ffdhe4096", 00:05:26.246 "ffdhe6144", 00:05:26.246 "ffdhe8192" 00:05:26.246 ] 00:05:26.246 } 00:05:26.246 }, 00:05:26.246 { 00:05:26.246 "method": "nvmf_set_max_subsystems", 00:05:26.246 "params": { 00:05:26.246 "max_subsystems": 1024 00:05:26.246 } 00:05:26.246 }, 00:05:26.246 { 00:05:26.246 "method": "nvmf_set_crdt", 00:05:26.246 "params": { 00:05:26.246 "crdt1": 0, 00:05:26.246 "crdt2": 0, 00:05:26.246 "crdt3": 0 00:05:26.246 } 00:05:26.246 }, 00:05:26.246 { 00:05:26.246 "method": "nvmf_create_transport", 00:05:26.246 "params": { 00:05:26.246 "trtype": "TCP", 00:05:26.246 "max_queue_depth": 128, 00:05:26.246 "max_io_qpairs_per_ctrlr": 127, 00:05:26.246 "in_capsule_data_size": 4096, 00:05:26.246 "max_io_size": 131072, 00:05:26.246 "io_unit_size": 131072, 00:05:26.246 
"max_aq_depth": 128, 00:05:26.246 "num_shared_buffers": 511, 00:05:26.246 "buf_cache_size": 4294967295, 00:05:26.246 "dif_insert_or_strip": false, 00:05:26.246 "zcopy": false, 00:05:26.246 "c2h_success": true, 00:05:26.246 "sock_priority": 0, 00:05:26.246 "abort_timeout_sec": 1, 00:05:26.246 "ack_timeout": 0, 00:05:26.246 "data_wr_pool_size": 0 00:05:26.246 } 00:05:26.246 } 00:05:26.246 ] 00:05:26.246 }, 00:05:26.246 { 00:05:26.246 "subsystem": "iscsi", 00:05:26.246 "config": [ 00:05:26.246 { 00:05:26.246 "method": "iscsi_set_options", 00:05:26.246 "params": { 00:05:26.246 "node_base": "iqn.2016-06.io.spdk", 00:05:26.246 "max_sessions": 128, 00:05:26.246 "max_connections_per_session": 2, 00:05:26.246 "max_queue_depth": 64, 00:05:26.246 "default_time2wait": 2, 00:05:26.246 "default_time2retain": 20, 00:05:26.246 "first_burst_length": 8192, 00:05:26.246 "immediate_data": true, 00:05:26.246 "allow_duplicated_isid": false, 00:05:26.246 "error_recovery_level": 0, 00:05:26.246 "nop_timeout": 60, 00:05:26.246 "nop_in_interval": 30, 00:05:26.246 "disable_chap": false, 00:05:26.246 "require_chap": false, 00:05:26.246 "mutual_chap": false, 00:05:26.246 "chap_group": 0, 00:05:26.246 "max_large_datain_per_connection": 64, 00:05:26.246 "max_r2t_per_connection": 4, 00:05:26.246 "pdu_pool_size": 36864, 00:05:26.246 "immediate_data_pool_size": 16384, 00:05:26.246 "data_out_pool_size": 2048 00:05:26.246 } 00:05:26.246 } 00:05:26.246 ] 00:05:26.246 } 00:05:26.246 ] 00:05:26.246 } 00:05:26.246 08:43:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:26.247 08:43:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57721 00:05:26.247 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57721 ']' 00:05:26.247 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57721 00:05:26.247 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:26.247 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.247 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57721 00:05:26.506 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.506 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.506 killing process with pid 57721 00:05:26.506 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57721' 00:05:26.506 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57721 00:05:26.506 08:43:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57721 00:05:28.412 08:43:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57771 00:05:28.412 08:43:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:28.412 08:43:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.683 08:43:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57771 00:05:33.684 08:43:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57771 ']' 00:05:33.684 08:43:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57771 00:05:33.684 08:43:11 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # uname 00:05:33.684 08:43:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.684 08:43:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57771 00:05:33.684 08:43:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.684 08:43:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.684 killing process with pid 57771 00:05:33.684 08:43:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57771' 00:05:33.684 08:43:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57771 00:05:33.684 08:43:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57771 00:05:35.060 08:43:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:35.060 08:43:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:35.060 00:05:35.060 real 0m10.126s 00:05:35.060 user 0m9.766s 00:05:35.060 sys 0m0.700s 00:05:35.060 08:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.060 ************************************ 00:05:35.060 END TEST skip_rpc_with_json 00:05:35.060 08:43:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:35.060 ************************************ 00:05:35.319 08:43:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:35.319 08:43:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.319 08:43:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.319 08:43:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.319 ************************************ 00:05:35.319 START TEST skip_rpc_with_delay 00:05:35.319 ************************************ 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.319 [2024-09-28 08:43:13.215719] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:35.319 [2024-09-28 08:43:13.215891] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.319 00:05:35.319 real 0m0.201s 00:05:35.319 user 0m0.110s 00:05:35.319 sys 0m0.089s 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.319 08:43:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:35.319 ************************************ 00:05:35.319 END TEST skip_rpc_with_delay 00:05:35.319 ************************************ 00:05:35.580 08:43:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:35.580 08:43:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:35.580 08:43:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:35.580 08:43:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:35.580 08:43:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:35.580 08:43:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.580 ************************************ 00:05:35.580 START TEST exit_on_failed_rpc_init 00:05:35.580 ************************************ 00:05:35.580 08:43:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:35.580 08:43:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57900 00:05:35.580 08:43:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57900 00:05:35.580 08:43:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57900 ']' 00:05:35.580 08:43:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.580 08:43:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.580 08:43:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.580 08:43:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
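The skip_rpc_with_json test completed above is, at heart, a configuration round-trip: the running target's state is dumped with save_config, a second target is launched headless from that JSON, and its log is grepped for 'TCP Transport Init' to prove the transport was re-created. A condensed sketch of that flow follows; the paths and the grep pattern are taken from this log, but the wrapper itself is an illustration, not the test script.

#!/usr/bin/env bash
# Sketch of the save_config -> relaunch --json -> verify round-trip shown above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk                 # assumed from this job
CONFIG=$SPDK_DIR/test/rpc/config.json
LOG=$SPDK_DIR/test/rpc/log.txt

# 1. Snapshot the live target's configuration over JSON-RPC.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock save_config > "$CONFIG"

# 2. Relaunch a headless target purely from the saved JSON (no RPC server).
"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json "$CONFIG" &> "$LOG" &
tgt_pid=$!
sleep 5                                               # the test also uses a fixed settle delay

# 3. The TCP transport must have been re-created from the JSON alone.
if grep -q 'TCP Transport Init' "$LOG"; then
    echo 'config round-trip OK: TCP transport restored from JSON'
fi
kill "$tgt_pid"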
00:05:35.580 08:43:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.580 08:43:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:35.580 [2024-09-28 08:43:13.454541] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:35.580 [2024-09-28 08:43:13.454682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57900 ] 00:05:35.839 [2024-09-28 08:43:13.610717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.839 [2024-09-28 08:43:13.764978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.099 [2024-09-28 08:43:13.966746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:36.668 08:43:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:36.668 [2024-09-28 08:43:14.618993] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
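exit_on_failed_rpc_init then launches a second spdk_tgt against the same /var/tmp/spdk.sock and requires it to fail, since the socket is already claimed by the first target; that expectation is what the harness's NOT wrapper asserts. The snippet below is a simplified stand-in for that helper, not the autotest_common.sh definition, and only illustrates the inverted-exit-status idea.

# Simplified stand-in for the harness's NOT helper: succeed only when the
# wrapped command fails, so an "expected failure" can be asserted directly.
NOT() {
    if "$@"; then
        return 1          # command unexpectedly succeeded
    fi
    return 0              # command failed, which is what we wanted
}

# A second target on an RPC socket that is already in use must exit non-zero.
NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 \
    && echo 'second instance refused the in-use RPC socket, as expected'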
00:05:36.668 [2024-09-28 08:43:14.619181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57918 ] 00:05:36.927 [2024-09-28 08:43:14.784242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.186 [2024-09-28 08:43:14.993731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.186 [2024-09-28 08:43:14.993900] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:37.186 [2024-09-28 08:43:14.993926] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:37.186 [2024-09-28 08:43:14.993942] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57900 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57900 ']' 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57900 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57900 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:37.446 killing process with pid 57900 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57900' 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57900 00:05:37.446 08:43:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57900 00:05:39.370 00:05:39.370 real 0m3.997s 00:05:39.370 user 0m4.772s 00:05:39.370 sys 0m0.537s 00:05:39.370 08:43:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.370 08:43:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:39.370 ************************************ 00:05:39.370 END TEST exit_on_failed_rpc_init 00:05:39.370 ************************************ 00:05:39.638 08:43:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:39.638 00:05:39.638 real 0m21.614s 00:05:39.638 user 0m21.310s 
00:05:39.638 sys 0m1.842s 00:05:39.638 08:43:17 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.638 08:43:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.638 ************************************ 00:05:39.638 END TEST skip_rpc 00:05:39.638 ************************************ 00:05:39.638 08:43:17 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:39.638 08:43:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.638 08:43:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.638 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:05:39.638 ************************************ 00:05:39.638 START TEST rpc_client 00:05:39.638 ************************************ 00:05:39.638 08:43:17 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:39.638 * Looking for test storage... 00:05:39.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:39.638 08:43:17 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.638 08:43:17 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:39.638 08:43:17 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.638 08:43:17 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.638 08:43:17 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:39.638 08:43:17 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.638 08:43:17 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.638 --rc genhtml_branch_coverage=1 00:05:39.638 --rc genhtml_function_coverage=1 00:05:39.638 --rc genhtml_legend=1 00:05:39.638 --rc geninfo_all_blocks=1 00:05:39.638 --rc geninfo_unexecuted_blocks=1 00:05:39.638 00:05:39.638 ' 00:05:39.638 08:43:17 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.638 --rc genhtml_branch_coverage=1 00:05:39.638 --rc genhtml_function_coverage=1 00:05:39.638 --rc genhtml_legend=1 00:05:39.638 --rc geninfo_all_blocks=1 00:05:39.638 --rc geninfo_unexecuted_blocks=1 00:05:39.638 00:05:39.638 ' 00:05:39.638 08:43:17 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.638 --rc genhtml_branch_coverage=1 00:05:39.638 --rc genhtml_function_coverage=1 00:05:39.638 --rc genhtml_legend=1 00:05:39.638 --rc geninfo_all_blocks=1 00:05:39.638 --rc geninfo_unexecuted_blocks=1 00:05:39.638 00:05:39.638 ' 00:05:39.638 08:43:17 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:39.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.638 --rc genhtml_branch_coverage=1 00:05:39.638 --rc genhtml_function_coverage=1 00:05:39.638 --rc genhtml_legend=1 00:05:39.638 --rc geninfo_all_blocks=1 00:05:39.638 --rc geninfo_unexecuted_blocks=1 00:05:39.638 00:05:39.638 ' 00:05:39.638 08:43:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:39.898 OK 00:05:39.898 08:43:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:39.898 00:05:39.898 real 0m0.253s 00:05:39.898 user 0m0.153s 00:05:39.898 sys 0m0.111s 00:05:39.898 08:43:17 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.898 08:43:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:39.898 ************************************ 00:05:39.898 END TEST rpc_client 00:05:39.898 ************************************ 00:05:39.898 08:43:17 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:39.898 08:43:17 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.898 08:43:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.898 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:05:39.898 ************************************ 00:05:39.898 START TEST json_config 00:05:39.898 ************************************ 00:05:39.898 08:43:17 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:39.898 08:43:17 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:39.898 08:43:17 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:39.898 08:43:17 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:39.898 08:43:17 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:39.898 08:43:17 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.898 08:43:17 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.898 08:43:17 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.898 08:43:17 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.898 08:43:17 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.898 08:43:17 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.898 08:43:17 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.898 08:43:17 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.898 08:43:17 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.898 08:43:17 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.898 08:43:17 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.898 08:43:17 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:39.898 08:43:17 json_config -- scripts/common.sh@345 -- # : 1 00:05:39.898 08:43:17 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.898 08:43:17 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.898 08:43:17 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:39.898 08:43:17 json_config -- scripts/common.sh@353 -- # local d=1 00:05:39.898 08:43:17 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.898 08:43:17 json_config -- scripts/common.sh@355 -- # echo 1 00:05:39.898 08:43:17 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.898 08:43:17 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:39.898 08:43:17 json_config -- scripts/common.sh@353 -- # local d=2 00:05:39.898 08:43:17 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.898 08:43:17 json_config -- scripts/common.sh@355 -- # echo 2 00:05:40.158 08:43:17 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.158 08:43:17 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.158 08:43:17 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.158 08:43:17 json_config -- scripts/common.sh@368 -- # return 0 00:05:40.158 08:43:17 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.158 08:43:17 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:40.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.158 --rc genhtml_branch_coverage=1 00:05:40.158 --rc genhtml_function_coverage=1 00:05:40.158 --rc genhtml_legend=1 00:05:40.158 --rc geninfo_all_blocks=1 00:05:40.158 --rc geninfo_unexecuted_blocks=1 00:05:40.158 00:05:40.158 ' 00:05:40.158 08:43:17 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:40.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.158 --rc genhtml_branch_coverage=1 00:05:40.158 --rc genhtml_function_coverage=1 00:05:40.158 --rc genhtml_legend=1 00:05:40.158 --rc geninfo_all_blocks=1 00:05:40.158 --rc geninfo_unexecuted_blocks=1 00:05:40.158 00:05:40.158 ' 00:05:40.158 08:43:17 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:40.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.158 --rc genhtml_branch_coverage=1 00:05:40.158 --rc genhtml_function_coverage=1 00:05:40.158 --rc genhtml_legend=1 00:05:40.158 --rc geninfo_all_blocks=1 00:05:40.158 --rc geninfo_unexecuted_blocks=1 00:05:40.158 00:05:40.158 ' 00:05:40.158 08:43:17 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:40.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.158 --rc genhtml_branch_coverage=1 00:05:40.158 --rc genhtml_function_coverage=1 00:05:40.158 --rc genhtml_legend=1 00:05:40.158 --rc geninfo_all_blocks=1 00:05:40.158 --rc geninfo_unexecuted_blocks=1 00:05:40.158 00:05:40.158 ' 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.158 08:43:17 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:40.158 08:43:17 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.158 08:43:17 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.158 08:43:17 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.158 08:43:17 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.158 08:43:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.158 08:43:17 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.158 08:43:17 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.158 08:43:17 json_config -- paths/export.sh@5 -- # export PATH 00:05:40.158 08:43:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@51 -- # : 0 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:40.158 08:43:17 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:40.158 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:40.158 08:43:17 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:40.158 08:43:17 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:40.159 08:43:17 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:40.159 INFO: JSON configuration test init 00:05:40.159 08:43:17 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:40.159 08:43:17 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:40.159 08:43:17 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:40.159 08:43:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:40.159 08:43:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.159 08:43:17 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:40.159 08:43:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:40.159 08:43:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.159 08:43:17 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:40.159 08:43:17 json_config -- json_config/common.sh@9 -- # local app=target 00:05:40.159 08:43:17 json_config -- json_config/common.sh@10 -- # shift 
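The json_config harness set up above tracks each application (target and, when enabled, initiator) through associative arrays: its PID, its RPC socket, its CLI parameters, and the config file it is compared against. A trimmed sketch of that bookkeeping and of the tgt_rpc wrapper it enables follows; the array names and values mirror what json_config/common.sh echoes in this log, but the code here is a simplification, not the file itself.

# Trimmed sketch of the per-app bookkeeping the harness declares above.
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock'
                       [initiator]='/var/tmp/spdk_initiator.sock')
declare -A app_params=([target]='-m 0x1 -s 1024'
                       [initiator]='-m 0x2 -g -u -s 1024')
declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json'
                         [initiator]='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json')

# Every target-side RPC goes through one wrapper pinned to the target's socket.
tgt_rpc() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "${app_socket[target]}" "$@"
}

# Example: the snapshot call the test issues once the target is configured.
tgt_rpc save_config > "${configs_path[target]}"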
00:05:40.159 08:43:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.159 08:43:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.159 08:43:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.159 08:43:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.159 08:43:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.159 08:43:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58077 00:05:40.159 Waiting for target to run... 00:05:40.159 08:43:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.159 08:43:17 json_config -- json_config/common.sh@25 -- # waitforlisten 58077 /var/tmp/spdk_tgt.sock 00:05:40.159 08:43:17 json_config -- common/autotest_common.sh@831 -- # '[' -z 58077 ']' 00:05:40.159 08:43:17 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.159 08:43:17 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.159 08:43:17 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:40.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.159 08:43:17 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.159 08:43:17 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.159 08:43:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.159 [2024-09-28 08:43:18.060877] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:40.159 [2024-09-28 08:43:18.061096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58077 ] 00:05:40.418 [2024-09-28 08:43:18.401445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.677 [2024-09-28 08:43:18.550799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.246 00:05:41.246 08:43:19 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.246 08:43:19 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:41.246 08:43:19 json_config -- json_config/common.sh@26 -- # echo '' 00:05:41.246 08:43:19 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:41.246 08:43:19 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:41.246 08:43:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.246 08:43:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.246 08:43:19 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:41.246 08:43:19 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:41.246 08:43:19 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:41.246 08:43:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.246 08:43:19 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:41.246 08:43:19 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:41.246 08:43:19 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:41.506 [2024-09-28 08:43:19.484878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:42.075 08:43:19 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:42.075 08:43:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:42.075 08:43:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.075 08:43:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.075 08:43:20 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:42.075 08:43:20 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:42.075 08:43:20 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:42.075 08:43:20 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:42.075 08:43:20 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:42.075 08:43:20 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:42.075 08:43:20 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:42.075 08:43:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:42.333 08:43:20 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:42.333 08:43:20 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:42.333 08:43:20 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:42.333 08:43:20 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:42.333 08:43:20 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:42.333 08:43:20 json_config -- json_config/json_config.sh@54 -- # sort 00:05:42.333 08:43:20 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:42.333 08:43:20 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:42.333 08:43:20 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:42.333 08:43:20 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:42.333 08:43:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.333 08:43:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.592 08:43:20 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:42.592 08:43:20 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:42.592 08:43:20 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:42.592 08:43:20 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:42.592 08:43:20 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:42.592 08:43:20 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:42.592 08:43:20 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:42.592 08:43:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.592 08:43:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.592 08:43:20 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:42.592 08:43:20 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:42.592 08:43:20 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:42.592 08:43:20 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:42.592 08:43:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:42.851 MallocForNvmf0 00:05:42.851 08:43:20 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:42.851 08:43:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:43.110 MallocForNvmf1 00:05:43.110 08:43:20 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:43.110 08:43:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:43.370 [2024-09-28 08:43:21.203184] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:43.370 08:43:21 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:43.370 08:43:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:43.629 08:43:21 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:43.629 08:43:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:43.887 08:43:21 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:43.887 08:43:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:44.146 08:43:21 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:44.147 08:43:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:44.405 [2024-09-28 08:43:22.248054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:44.405 08:43:22 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:44.405 08:43:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:44.405 08:43:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.405 08:43:22 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:44.405 08:43:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:44.405 08:43:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.405 08:43:22 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:05:44.405 08:43:22 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:44.405 08:43:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:44.691 MallocBdevForConfigChangeCheck 00:05:44.691 08:43:22 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:44.691 08:43:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:44.691 08:43:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.691 08:43:22 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:44.691 08:43:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.258 INFO: shutting down applications... 00:05:45.258 08:43:23 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:45.258 08:43:23 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:45.258 08:43:23 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:45.258 08:43:23 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:45.258 08:43:23 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:45.517 Calling clear_iscsi_subsystem 00:05:45.517 Calling clear_nvmf_subsystem 00:05:45.517 Calling clear_nbd_subsystem 00:05:45.517 Calling clear_ublk_subsystem 00:05:45.517 Calling clear_vhost_blk_subsystem 00:05:45.517 Calling clear_vhost_scsi_subsystem 00:05:45.517 Calling clear_bdev_subsystem 00:05:45.517 08:43:23 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:45.517 08:43:23 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:45.517 08:43:23 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:45.517 08:43:23 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.517 08:43:23 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:45.517 08:43:23 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:46.084 08:43:23 json_config -- json_config/json_config.sh@352 -- # break 00:05:46.084 08:43:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:46.084 08:43:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:46.084 08:43:23 json_config -- json_config/common.sh@31 -- # local app=target 00:05:46.084 08:43:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:46.084 08:43:23 json_config -- json_config/common.sh@35 -- # [[ -n 58077 ]] 00:05:46.084 08:43:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58077 00:05:46.084 08:43:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:46.084 08:43:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.084 08:43:23 json_config -- json_config/common.sh@41 -- # kill -0 58077 00:05:46.084 08:43:23 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:05:46.344 08:43:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.344 08:43:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.344 08:43:24 json_config -- json_config/common.sh@41 -- # kill -0 58077 00:05:46.344 08:43:24 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.913 08:43:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.913 08:43:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.913 08:43:24 json_config -- json_config/common.sh@41 -- # kill -0 58077 00:05:46.913 08:43:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:46.913 08:43:24 json_config -- json_config/common.sh@43 -- # break 00:05:46.913 SPDK target shutdown done 00:05:46.913 08:43:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:46.913 08:43:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:46.913 INFO: relaunching applications... 00:05:46.913 08:43:24 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:46.913 08:43:24 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:46.913 08:43:24 json_config -- json_config/common.sh@9 -- # local app=target 00:05:46.913 08:43:24 json_config -- json_config/common.sh@10 -- # shift 00:05:46.913 08:43:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.913 08:43:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.913 08:43:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.913 08:43:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.913 08:43:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.913 08:43:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58291 00:05:46.913 08:43:24 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:46.913 08:43:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.913 Waiting for target to run... 00:05:46.913 08:43:24 json_config -- json_config/common.sh@25 -- # waitforlisten 58291 /var/tmp/spdk_tgt.sock 00:05:46.913 08:43:24 json_config -- common/autotest_common.sh@831 -- # '[' -z 58291 ']' 00:05:46.913 08:43:24 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.913 08:43:24 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.913 08:43:24 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.913 08:43:24 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.913 08:43:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.172 [2024-09-28 08:43:24.928482] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:47.172 [2024-09-28 08:43:24.928878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58291 ] 00:05:47.431 [2024-09-28 08:43:25.232482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.431 [2024-09-28 08:43:25.391985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.001 [2024-09-28 08:43:25.686814] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.261 [2024-09-28 08:43:26.239326] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.520 [2024-09-28 08:43:26.271483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:48.520 00:05:48.520 INFO: Checking if target configuration is the same... 00:05:48.520 08:43:26 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.520 08:43:26 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:48.520 08:43:26 json_config -- json_config/common.sh@26 -- # echo '' 00:05:48.520 08:43:26 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:48.520 08:43:26 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:48.520 08:43:26 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.520 08:43:26 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:48.520 08:43:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.520 + '[' 2 -ne 2 ']' 00:05:48.520 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:48.520 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:48.520 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:48.520 +++ basename /dev/fd/62 00:05:48.520 ++ mktemp /tmp/62.XXX 00:05:48.520 + tmp_file_1=/tmp/62.3e2 00:05:48.520 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.520 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:48.520 + tmp_file_2=/tmp/spdk_tgt_config.json.aeH 00:05:48.520 + ret=0 00:05:48.520 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:48.778 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:49.037 + diff -u /tmp/62.3e2 /tmp/spdk_tgt_config.json.aeH 00:05:49.037 INFO: JSON config files are the same 00:05:49.037 + echo 'INFO: JSON config files are the same' 00:05:49.037 + rm /tmp/62.3e2 /tmp/spdk_tgt_config.json.aeH 00:05:49.037 + exit 0 00:05:49.037 INFO: changing configuration and checking if this can be detected... 00:05:49.037 08:43:26 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:49.037 08:43:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:05:49.037 08:43:26 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:49.037 08:43:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:49.296 08:43:27 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:49.296 08:43:27 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:49.296 08:43:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:49.296 + '[' 2 -ne 2 ']' 00:05:49.296 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:49.296 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:49.296 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:49.296 +++ basename /dev/fd/62 00:05:49.296 ++ mktemp /tmp/62.XXX 00:05:49.296 + tmp_file_1=/tmp/62.l8n 00:05:49.296 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:49.296 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:49.296 + tmp_file_2=/tmp/spdk_tgt_config.json.jmm 00:05:49.296 + ret=0 00:05:49.296 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:49.554 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:49.814 + diff -u /tmp/62.l8n /tmp/spdk_tgt_config.json.jmm 00:05:49.814 + ret=1 00:05:49.814 + echo '=== Start of file: /tmp/62.l8n ===' 00:05:49.814 + cat /tmp/62.l8n 00:05:49.814 + echo '=== End of file: /tmp/62.l8n ===' 00:05:49.814 + echo '' 00:05:49.814 + echo '=== Start of file: /tmp/spdk_tgt_config.json.jmm ===' 00:05:49.814 + cat /tmp/spdk_tgt_config.json.jmm 00:05:49.814 + echo '=== End of file: /tmp/spdk_tgt_config.json.jmm ===' 00:05:49.814 + echo '' 00:05:49.814 + rm /tmp/62.l8n /tmp/spdk_tgt_config.json.jmm 00:05:49.814 + exit 1 00:05:49.814 INFO: configuration change detected. 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
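Note: the "JSON config files are the same" and "configuration change detected" checks traced above reduce to diffing two normalized snapshots of the target's JSON configuration. A minimal sketch of that flow, using the rpc.py and config_filter.py paths shown in the trace, illustrative temp-file names in place of the /dev/fd/62 and mktemp plumbing the test actually uses, and assuming config_filter.py filters stdin to stdout (redirections are not visible in the xtrace output):

  # dump the running target's configuration over its RPC socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live.json
  # sort both snapshots so ordering differences are ignored
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < live.json > live.sorted
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
      < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > ref.sorted
  # diff exit 0 means the configs match; a non-zero status is what the test reports as a detected change
  diff -u ref.sorted live.sorted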
00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@324 -- # [[ -n 58291 ]] 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.814 08:43:27 json_config -- json_config/json_config.sh@330 -- # killprocess 58291 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@950 -- # '[' -z 58291 ']' 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@954 -- # kill -0 58291 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@955 -- # uname 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58291 00:05:49.814 killing process with pid 58291 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58291' 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@969 -- # kill 58291 00:05:49.814 08:43:27 json_config -- common/autotest_common.sh@974 -- # wait 58291 00:05:50.752 08:43:28 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:50.752 08:43:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:50.752 08:43:28 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:50.752 08:43:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.752 08:43:28 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:50.752 INFO: Success 00:05:50.752 08:43:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:50.752 ************************************ 00:05:50.752 END TEST json_config 00:05:50.752 
************************************ 00:05:50.752 00:05:50.752 real 0m10.810s 00:05:50.752 user 0m14.793s 00:05:50.752 sys 0m1.745s 00:05:50.752 08:43:28 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.752 08:43:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.752 08:43:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:50.752 08:43:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.752 08:43:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.752 08:43:28 -- common/autotest_common.sh@10 -- # set +x 00:05:50.752 ************************************ 00:05:50.752 START TEST json_config_extra_key 00:05:50.752 ************************************ 00:05:50.752 08:43:28 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:50.752 08:43:28 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:50.752 08:43:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:50.752 08:43:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:51.014 08:43:28 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.014 08:43:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:51.014 08:43:28 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.014 08:43:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:51.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.014 --rc genhtml_branch_coverage=1 00:05:51.014 --rc genhtml_function_coverage=1 00:05:51.014 --rc genhtml_legend=1 00:05:51.014 --rc geninfo_all_blocks=1 00:05:51.014 --rc geninfo_unexecuted_blocks=1 00:05:51.014 00:05:51.014 ' 00:05:51.014 08:43:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:51.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.014 --rc genhtml_branch_coverage=1 00:05:51.014 --rc genhtml_function_coverage=1 00:05:51.014 --rc genhtml_legend=1 00:05:51.014 --rc geninfo_all_blocks=1 00:05:51.014 --rc geninfo_unexecuted_blocks=1 00:05:51.014 00:05:51.014 ' 00:05:51.014 08:43:28 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:51.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.014 --rc genhtml_branch_coverage=1 00:05:51.014 --rc genhtml_function_coverage=1 00:05:51.014 --rc genhtml_legend=1 00:05:51.014 --rc geninfo_all_blocks=1 00:05:51.014 --rc geninfo_unexecuted_blocks=1 00:05:51.014 00:05:51.014 ' 00:05:51.014 08:43:28 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:51.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.014 --rc genhtml_branch_coverage=1 00:05:51.014 --rc genhtml_function_coverage=1 00:05:51.014 --rc genhtml_legend=1 00:05:51.014 --rc geninfo_all_blocks=1 00:05:51.014 --rc geninfo_unexecuted_blocks=1 00:05:51.014 00:05:51.014 ' 00:05:51.014 08:43:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:51.014 08:43:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:51.014 08:43:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.014 08:43:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.014 08:43:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.014 08:43:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.014 08:43:28 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.014 08:43:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.014 08:43:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.014 08:43:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.014 08:43:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.014 08:43:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.014 08:43:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:51.015 08:43:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:51.015 08:43:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.015 08:43:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.015 08:43:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.015 08:43:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.015 08:43:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.015 08:43:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.015 08:43:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:51.015 08:43:28 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:51.015 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:51.015 08:43:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:51.015 08:43:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:51.015 08:43:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:51.015 08:43:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:51.015 08:43:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:51.015 08:43:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:51.015 08:43:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:51.015 08:43:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:51.015 08:43:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:51.015 08:43:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:51.015 08:43:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:51.015 08:43:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:51.015 INFO: launching applications... 
00:05:51.015 08:43:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.015 08:43:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:51.015 08:43:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:51.015 08:43:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:51.015 08:43:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:51.015 08:43:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:51.015 08:43:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.015 08:43:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:51.015 08:43:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58457 00:05:51.015 Waiting for target to run... 00:05:51.015 08:43:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:51.015 08:43:28 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.015 08:43:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58457 /var/tmp/spdk_tgt.sock 00:05:51.015 08:43:28 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58457 ']' 00:05:51.015 08:43:28 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.015 08:43:28 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.015 08:43:28 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.015 08:43:28 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.015 08:43:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:51.015 [2024-09-28 08:43:28.942226] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:51.015 [2024-09-28 08:43:28.942418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58457 ] 00:05:51.589 [2024-09-28 08:43:29.281684] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.589 [2024-09-28 08:43:29.422278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.849 [2024-09-28 08:43:29.614664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.108 00:05:52.109 INFO: shutting down applications... 00:05:52.109 08:43:30 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.109 08:43:30 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:52.109 08:43:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:52.109 08:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
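Note: the launch logic json_config_test_start_app drives above, and the teardown traced next, amount to starting spdk_tgt against a private RPC socket, polling until that socket answers, then on shutdown sending SIGINT and waiting for the pid to exit. A rough sketch under those assumptions (the real helpers are waitforlisten and json_config_test_shutdown_app in json_config/common.sh; error handling omitted):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  pid=$!
  # poll the RPC socket until the target answers (waitforlisten retries much like this)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # graceful shutdown: SIGINT, then poll for up to 30 half-second intervals as in common.sh
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break
      sleep 0.5
  done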
00:05:52.109 08:43:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:52.109 08:43:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:52.109 08:43:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:52.109 08:43:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58457 ]] 00:05:52.109 08:43:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58457 00:05:52.109 08:43:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:52.109 08:43:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.109 08:43:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58457 00:05:52.109 08:43:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.678 08:43:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.678 08:43:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.678 08:43:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58457 00:05:52.678 08:43:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.247 08:43:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.247 08:43:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.247 08:43:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58457 00:05:53.247 08:43:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.815 08:43:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.815 08:43:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.815 08:43:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58457 00:05:53.815 08:43:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.074 08:43:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.074 08:43:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.074 08:43:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58457 00:05:54.074 08:43:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:54.641 08:43:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:54.641 08:43:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.641 08:43:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58457 00:05:54.641 08:43:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:54.641 08:43:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:54.641 SPDK target shutdown done 00:05:54.641 08:43:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:54.641 08:43:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:54.641 Success 00:05:54.641 08:43:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:54.641 ************************************ 00:05:54.641 END TEST json_config_extra_key 00:05:54.641 ************************************ 00:05:54.641 00:05:54.641 real 0m3.933s 00:05:54.641 user 0m3.495s 00:05:54.641 sys 0m0.511s 00:05:54.641 08:43:32 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:54.641 08:43:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:54.641 08:43:32 -- spdk/autotest.sh@161 
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.641 08:43:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.641 08:43:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.641 08:43:32 -- common/autotest_common.sh@10 -- # set +x 00:05:54.641 ************************************ 00:05:54.641 START TEST alias_rpc 00:05:54.641 ************************************ 00:05:54.641 08:43:32 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:54.901 * Looking for test storage... 00:05:54.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.901 08:43:32 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:54.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:54.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.901 --rc genhtml_branch_coverage=1 00:05:54.901 --rc genhtml_function_coverage=1 00:05:54.901 --rc genhtml_legend=1 00:05:54.901 --rc geninfo_all_blocks=1 00:05:54.901 --rc geninfo_unexecuted_blocks=1 00:05:54.901 00:05:54.901 ' 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:54.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.901 --rc genhtml_branch_coverage=1 00:05:54.901 --rc genhtml_function_coverage=1 00:05:54.901 --rc genhtml_legend=1 00:05:54.901 --rc geninfo_all_blocks=1 00:05:54.901 --rc geninfo_unexecuted_blocks=1 00:05:54.901 00:05:54.901 ' 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:54.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.901 --rc genhtml_branch_coverage=1 00:05:54.901 --rc genhtml_function_coverage=1 00:05:54.901 --rc genhtml_legend=1 00:05:54.901 --rc geninfo_all_blocks=1 00:05:54.901 --rc geninfo_unexecuted_blocks=1 00:05:54.901 00:05:54.901 ' 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:54.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.901 --rc genhtml_branch_coverage=1 00:05:54.901 --rc genhtml_function_coverage=1 00:05:54.901 --rc genhtml_legend=1 00:05:54.901 --rc geninfo_all_blocks=1 00:05:54.901 --rc geninfo_unexecuted_blocks=1 00:05:54.901 00:05:54.901 ' 00:05:54.901 08:43:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:54.901 08:43:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58562 00:05:54.901 08:43:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58562 00:05:54.901 08:43:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58562 ']' 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.901 08:43:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.160 [2024-09-28 08:43:32.918296] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:55.160 [2024-09-28 08:43:32.919405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58562 ] 00:05:55.160 [2024-09-28 08:43:33.087863] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.418 [2024-09-28 08:43:33.245131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.677 [2024-09-28 08:43:33.436352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.243 08:43:33 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.243 08:43:33 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:56.244 08:43:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:56.244 08:43:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58562 00:05:56.244 08:43:34 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58562 ']' 00:05:56.244 08:43:34 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58562 00:05:56.244 08:43:34 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:56.244 08:43:34 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.244 08:43:34 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58562 00:05:56.502 killing process with pid 58562 00:05:56.502 08:43:34 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.502 08:43:34 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.502 08:43:34 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58562' 00:05:56.502 08:43:34 alias_rpc -- common/autotest_common.sh@969 -- # kill 58562 00:05:56.502 08:43:34 alias_rpc -- common/autotest_common.sh@974 -- # wait 58562 00:05:58.403 ************************************ 00:05:58.403 END TEST alias_rpc 00:05:58.403 ************************************ 00:05:58.403 00:05:58.403 real 0m3.676s 00:05:58.403 user 0m3.904s 00:05:58.403 sys 0m0.479s 00:05:58.403 08:43:36 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.403 08:43:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.403 08:43:36 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:58.403 08:43:36 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:58.403 08:43:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.403 08:43:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.403 08:43:36 -- common/autotest_common.sh@10 -- # set +x 00:05:58.403 ************************************ 00:05:58.403 START TEST spdkcli_tcp 00:05:58.403 ************************************ 00:05:58.404 08:43:36 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:58.404 * Looking for test storage... 
00:05:58.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:58.662 08:43:36 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:58.662 08:43:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:58.662 08:43:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:58.662 08:43:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.662 08:43:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:58.663 08:43:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:58.663 08:43:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.663 08:43:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:58.663 08:43:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.663 08:43:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:58.663 08:43:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:58.663 08:43:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.663 08:43:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:58.663 08:43:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.663 08:43:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.663 08:43:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.663 08:43:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:58.663 08:43:36 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.663 08:43:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:58.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.663 --rc genhtml_branch_coverage=1 00:05:58.663 --rc genhtml_function_coverage=1 00:05:58.663 --rc genhtml_legend=1 00:05:58.663 --rc geninfo_all_blocks=1 00:05:58.663 --rc geninfo_unexecuted_blocks=1 00:05:58.663 00:05:58.663 ' 00:05:58.663 08:43:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:58.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.663 --rc genhtml_branch_coverage=1 00:05:58.663 --rc genhtml_function_coverage=1 00:05:58.663 --rc genhtml_legend=1 00:05:58.663 --rc geninfo_all_blocks=1 00:05:58.663 --rc geninfo_unexecuted_blocks=1 00:05:58.663 
00:05:58.663 ' 00:05:58.663 08:43:36 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:58.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.663 --rc genhtml_branch_coverage=1 00:05:58.663 --rc genhtml_function_coverage=1 00:05:58.663 --rc genhtml_legend=1 00:05:58.663 --rc geninfo_all_blocks=1 00:05:58.663 --rc geninfo_unexecuted_blocks=1 00:05:58.663 00:05:58.663 ' 00:05:58.663 08:43:36 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:58.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.663 --rc genhtml_branch_coverage=1 00:05:58.663 --rc genhtml_function_coverage=1 00:05:58.663 --rc genhtml_legend=1 00:05:58.663 --rc geninfo_all_blocks=1 00:05:58.663 --rc geninfo_unexecuted_blocks=1 00:05:58.663 00:05:58.663 ' 00:05:58.663 08:43:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:58.663 08:43:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:58.663 08:43:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:58.663 08:43:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:58.663 08:43:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:58.663 08:43:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:58.663 08:43:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:58.663 08:43:36 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:58.663 08:43:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.663 08:43:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58658 00:05:58.663 08:43:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58658 00:05:58.663 08:43:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:58.663 08:43:36 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58658 ']' 00:05:58.663 08:43:36 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.663 08:43:36 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.663 08:43:36 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.663 08:43:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.663 08:43:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.922 [2024-09-28 08:43:36.661583] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:58.922 [2024-09-28 08:43:36.662020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58658 ] 00:05:58.922 [2024-09-28 08:43:36.827628] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.180 [2024-09-28 08:43:36.994260] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.180 [2024-09-28 08:43:36.994277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.438 [2024-09-28 08:43:37.203035] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.006 08:43:37 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.006 08:43:37 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:00.006 08:43:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58675 00:06:00.006 08:43:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:00.006 08:43:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:00.266 [ 00:06:00.266 "bdev_malloc_delete", 00:06:00.266 "bdev_malloc_create", 00:06:00.266 "bdev_null_resize", 00:06:00.266 "bdev_null_delete", 00:06:00.266 "bdev_null_create", 00:06:00.266 "bdev_nvme_cuse_unregister", 00:06:00.266 "bdev_nvme_cuse_register", 00:06:00.266 "bdev_opal_new_user", 00:06:00.266 "bdev_opal_set_lock_state", 00:06:00.266 "bdev_opal_delete", 00:06:00.266 "bdev_opal_get_info", 00:06:00.266 "bdev_opal_create", 00:06:00.266 "bdev_nvme_opal_revert", 00:06:00.266 "bdev_nvme_opal_init", 00:06:00.266 "bdev_nvme_send_cmd", 00:06:00.266 "bdev_nvme_set_keys", 00:06:00.266 "bdev_nvme_get_path_iostat", 00:06:00.266 "bdev_nvme_get_mdns_discovery_info", 00:06:00.266 "bdev_nvme_stop_mdns_discovery", 00:06:00.266 "bdev_nvme_start_mdns_discovery", 00:06:00.266 "bdev_nvme_set_multipath_policy", 00:06:00.266 "bdev_nvme_set_preferred_path", 00:06:00.266 "bdev_nvme_get_io_paths", 00:06:00.266 "bdev_nvme_remove_error_injection", 00:06:00.266 "bdev_nvme_add_error_injection", 00:06:00.266 "bdev_nvme_get_discovery_info", 00:06:00.266 "bdev_nvme_stop_discovery", 00:06:00.266 "bdev_nvme_start_discovery", 00:06:00.266 "bdev_nvme_get_controller_health_info", 00:06:00.266 "bdev_nvme_disable_controller", 00:06:00.266 "bdev_nvme_enable_controller", 00:06:00.266 "bdev_nvme_reset_controller", 00:06:00.266 "bdev_nvme_get_transport_statistics", 00:06:00.266 "bdev_nvme_apply_firmware", 00:06:00.266 "bdev_nvme_detach_controller", 00:06:00.266 "bdev_nvme_get_controllers", 00:06:00.266 "bdev_nvme_attach_controller", 00:06:00.266 "bdev_nvme_set_hotplug", 00:06:00.266 "bdev_nvme_set_options", 00:06:00.266 "bdev_passthru_delete", 00:06:00.266 "bdev_passthru_create", 00:06:00.266 "bdev_lvol_set_parent_bdev", 00:06:00.266 "bdev_lvol_set_parent", 00:06:00.266 "bdev_lvol_check_shallow_copy", 00:06:00.266 "bdev_lvol_start_shallow_copy", 00:06:00.266 "bdev_lvol_grow_lvstore", 00:06:00.266 "bdev_lvol_get_lvols", 00:06:00.266 "bdev_lvol_get_lvstores", 00:06:00.266 "bdev_lvol_delete", 00:06:00.266 "bdev_lvol_set_read_only", 00:06:00.266 "bdev_lvol_resize", 00:06:00.266 "bdev_lvol_decouple_parent", 00:06:00.266 "bdev_lvol_inflate", 00:06:00.266 "bdev_lvol_rename", 00:06:00.266 "bdev_lvol_clone_bdev", 00:06:00.266 "bdev_lvol_clone", 00:06:00.266 "bdev_lvol_snapshot", 
00:06:00.266 "bdev_lvol_create", 00:06:00.266 "bdev_lvol_delete_lvstore", 00:06:00.266 "bdev_lvol_rename_lvstore", 00:06:00.266 "bdev_lvol_create_lvstore", 00:06:00.266 "bdev_raid_set_options", 00:06:00.266 "bdev_raid_remove_base_bdev", 00:06:00.266 "bdev_raid_add_base_bdev", 00:06:00.266 "bdev_raid_delete", 00:06:00.266 "bdev_raid_create", 00:06:00.266 "bdev_raid_get_bdevs", 00:06:00.266 "bdev_error_inject_error", 00:06:00.266 "bdev_error_delete", 00:06:00.266 "bdev_error_create", 00:06:00.266 "bdev_split_delete", 00:06:00.266 "bdev_split_create", 00:06:00.266 "bdev_delay_delete", 00:06:00.266 "bdev_delay_create", 00:06:00.266 "bdev_delay_update_latency", 00:06:00.266 "bdev_zone_block_delete", 00:06:00.266 "bdev_zone_block_create", 00:06:00.266 "blobfs_create", 00:06:00.266 "blobfs_detect", 00:06:00.266 "blobfs_set_cache_size", 00:06:00.266 "bdev_aio_delete", 00:06:00.266 "bdev_aio_rescan", 00:06:00.266 "bdev_aio_create", 00:06:00.266 "bdev_ftl_set_property", 00:06:00.266 "bdev_ftl_get_properties", 00:06:00.266 "bdev_ftl_get_stats", 00:06:00.266 "bdev_ftl_unmap", 00:06:00.266 "bdev_ftl_unload", 00:06:00.266 "bdev_ftl_delete", 00:06:00.266 "bdev_ftl_load", 00:06:00.266 "bdev_ftl_create", 00:06:00.266 "bdev_virtio_attach_controller", 00:06:00.266 "bdev_virtio_scsi_get_devices", 00:06:00.266 "bdev_virtio_detach_controller", 00:06:00.266 "bdev_virtio_blk_set_hotplug", 00:06:00.266 "bdev_iscsi_delete", 00:06:00.266 "bdev_iscsi_create", 00:06:00.266 "bdev_iscsi_set_options", 00:06:00.266 "bdev_uring_delete", 00:06:00.266 "bdev_uring_rescan", 00:06:00.266 "bdev_uring_create", 00:06:00.266 "accel_error_inject_error", 00:06:00.266 "ioat_scan_accel_module", 00:06:00.266 "dsa_scan_accel_module", 00:06:00.266 "iaa_scan_accel_module", 00:06:00.266 "vfu_virtio_create_fs_endpoint", 00:06:00.266 "vfu_virtio_create_scsi_endpoint", 00:06:00.266 "vfu_virtio_scsi_remove_target", 00:06:00.266 "vfu_virtio_scsi_add_target", 00:06:00.266 "vfu_virtio_create_blk_endpoint", 00:06:00.266 "vfu_virtio_delete_endpoint", 00:06:00.266 "keyring_file_remove_key", 00:06:00.266 "keyring_file_add_key", 00:06:00.266 "keyring_linux_set_options", 00:06:00.266 "fsdev_aio_delete", 00:06:00.266 "fsdev_aio_create", 00:06:00.266 "iscsi_get_histogram", 00:06:00.266 "iscsi_enable_histogram", 00:06:00.266 "iscsi_set_options", 00:06:00.266 "iscsi_get_auth_groups", 00:06:00.266 "iscsi_auth_group_remove_secret", 00:06:00.266 "iscsi_auth_group_add_secret", 00:06:00.266 "iscsi_delete_auth_group", 00:06:00.266 "iscsi_create_auth_group", 00:06:00.266 "iscsi_set_discovery_auth", 00:06:00.266 "iscsi_get_options", 00:06:00.266 "iscsi_target_node_request_logout", 00:06:00.266 "iscsi_target_node_set_redirect", 00:06:00.266 "iscsi_target_node_set_auth", 00:06:00.266 "iscsi_target_node_add_lun", 00:06:00.266 "iscsi_get_stats", 00:06:00.266 "iscsi_get_connections", 00:06:00.266 "iscsi_portal_group_set_auth", 00:06:00.266 "iscsi_start_portal_group", 00:06:00.266 "iscsi_delete_portal_group", 00:06:00.266 "iscsi_create_portal_group", 00:06:00.266 "iscsi_get_portal_groups", 00:06:00.266 "iscsi_delete_target_node", 00:06:00.266 "iscsi_target_node_remove_pg_ig_maps", 00:06:00.266 "iscsi_target_node_add_pg_ig_maps", 00:06:00.266 "iscsi_create_target_node", 00:06:00.266 "iscsi_get_target_nodes", 00:06:00.266 "iscsi_delete_initiator_group", 00:06:00.266 "iscsi_initiator_group_remove_initiators", 00:06:00.266 "iscsi_initiator_group_add_initiators", 00:06:00.266 "iscsi_create_initiator_group", 00:06:00.266 "iscsi_get_initiator_groups", 00:06:00.266 
"nvmf_set_crdt", 00:06:00.266 "nvmf_set_config", 00:06:00.266 "nvmf_set_max_subsystems", 00:06:00.266 "nvmf_stop_mdns_prr", 00:06:00.266 "nvmf_publish_mdns_prr", 00:06:00.266 "nvmf_subsystem_get_listeners", 00:06:00.266 "nvmf_subsystem_get_qpairs", 00:06:00.266 "nvmf_subsystem_get_controllers", 00:06:00.266 "nvmf_get_stats", 00:06:00.266 "nvmf_get_transports", 00:06:00.266 "nvmf_create_transport", 00:06:00.266 "nvmf_get_targets", 00:06:00.266 "nvmf_delete_target", 00:06:00.266 "nvmf_create_target", 00:06:00.266 "nvmf_subsystem_allow_any_host", 00:06:00.266 "nvmf_subsystem_set_keys", 00:06:00.266 "nvmf_subsystem_remove_host", 00:06:00.266 "nvmf_subsystem_add_host", 00:06:00.266 "nvmf_ns_remove_host", 00:06:00.266 "nvmf_ns_add_host", 00:06:00.266 "nvmf_subsystem_remove_ns", 00:06:00.266 "nvmf_subsystem_set_ns_ana_group", 00:06:00.266 "nvmf_subsystem_add_ns", 00:06:00.266 "nvmf_subsystem_listener_set_ana_state", 00:06:00.266 "nvmf_discovery_get_referrals", 00:06:00.266 "nvmf_discovery_remove_referral", 00:06:00.266 "nvmf_discovery_add_referral", 00:06:00.266 "nvmf_subsystem_remove_listener", 00:06:00.267 "nvmf_subsystem_add_listener", 00:06:00.267 "nvmf_delete_subsystem", 00:06:00.267 "nvmf_create_subsystem", 00:06:00.267 "nvmf_get_subsystems", 00:06:00.267 "env_dpdk_get_mem_stats", 00:06:00.267 "nbd_get_disks", 00:06:00.267 "nbd_stop_disk", 00:06:00.267 "nbd_start_disk", 00:06:00.267 "ublk_recover_disk", 00:06:00.267 "ublk_get_disks", 00:06:00.267 "ublk_stop_disk", 00:06:00.267 "ublk_start_disk", 00:06:00.267 "ublk_destroy_target", 00:06:00.267 "ublk_create_target", 00:06:00.267 "virtio_blk_create_transport", 00:06:00.267 "virtio_blk_get_transports", 00:06:00.267 "vhost_controller_set_coalescing", 00:06:00.267 "vhost_get_controllers", 00:06:00.267 "vhost_delete_controller", 00:06:00.267 "vhost_create_blk_controller", 00:06:00.267 "vhost_scsi_controller_remove_target", 00:06:00.267 "vhost_scsi_controller_add_target", 00:06:00.267 "vhost_start_scsi_controller", 00:06:00.267 "vhost_create_scsi_controller", 00:06:00.267 "thread_set_cpumask", 00:06:00.267 "scheduler_set_options", 00:06:00.267 "framework_get_governor", 00:06:00.267 "framework_get_scheduler", 00:06:00.267 "framework_set_scheduler", 00:06:00.267 "framework_get_reactors", 00:06:00.267 "thread_get_io_channels", 00:06:00.267 "thread_get_pollers", 00:06:00.267 "thread_get_stats", 00:06:00.267 "framework_monitor_context_switch", 00:06:00.267 "spdk_kill_instance", 00:06:00.267 "log_enable_timestamps", 00:06:00.267 "log_get_flags", 00:06:00.267 "log_clear_flag", 00:06:00.267 "log_set_flag", 00:06:00.267 "log_get_level", 00:06:00.267 "log_set_level", 00:06:00.267 "log_get_print_level", 00:06:00.267 "log_set_print_level", 00:06:00.267 "framework_enable_cpumask_locks", 00:06:00.267 "framework_disable_cpumask_locks", 00:06:00.267 "framework_wait_init", 00:06:00.267 "framework_start_init", 00:06:00.267 "scsi_get_devices", 00:06:00.267 "bdev_get_histogram", 00:06:00.267 "bdev_enable_histogram", 00:06:00.267 "bdev_set_qos_limit", 00:06:00.267 "bdev_set_qd_sampling_period", 00:06:00.267 "bdev_get_bdevs", 00:06:00.267 "bdev_reset_iostat", 00:06:00.267 "bdev_get_iostat", 00:06:00.267 "bdev_examine", 00:06:00.267 "bdev_wait_for_examine", 00:06:00.267 "bdev_set_options", 00:06:00.267 "accel_get_stats", 00:06:00.267 "accel_set_options", 00:06:00.267 "accel_set_driver", 00:06:00.267 "accel_crypto_key_destroy", 00:06:00.267 "accel_crypto_keys_get", 00:06:00.267 "accel_crypto_key_create", 00:06:00.267 "accel_assign_opc", 00:06:00.267 
"accel_get_module_info", 00:06:00.267 "accel_get_opc_assignments", 00:06:00.267 "vmd_rescan", 00:06:00.267 "vmd_remove_device", 00:06:00.267 "vmd_enable", 00:06:00.267 "sock_get_default_impl", 00:06:00.267 "sock_set_default_impl", 00:06:00.267 "sock_impl_set_options", 00:06:00.267 "sock_impl_get_options", 00:06:00.267 "iobuf_get_stats", 00:06:00.267 "iobuf_set_options", 00:06:00.267 "keyring_get_keys", 00:06:00.267 "vfu_tgt_set_base_path", 00:06:00.267 "framework_get_pci_devices", 00:06:00.267 "framework_get_config", 00:06:00.267 "framework_get_subsystems", 00:06:00.267 "fsdev_set_opts", 00:06:00.267 "fsdev_get_opts", 00:06:00.267 "trace_get_info", 00:06:00.267 "trace_get_tpoint_group_mask", 00:06:00.267 "trace_disable_tpoint_group", 00:06:00.267 "trace_enable_tpoint_group", 00:06:00.267 "trace_clear_tpoint_mask", 00:06:00.267 "trace_set_tpoint_mask", 00:06:00.267 "notify_get_notifications", 00:06:00.267 "notify_get_types", 00:06:00.267 "spdk_get_version", 00:06:00.267 "rpc_get_methods" 00:06:00.267 ] 00:06:00.267 08:43:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:00.267 08:43:38 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:00.267 08:43:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.267 08:43:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:00.267 08:43:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58658 00:06:00.267 08:43:38 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58658 ']' 00:06:00.267 08:43:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58658 00:06:00.267 08:43:38 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:00.267 08:43:38 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.267 08:43:38 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58658 00:06:00.267 08:43:38 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.267 08:43:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.267 08:43:38 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58658' 00:06:00.267 killing process with pid 58658 00:06:00.267 08:43:38 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58658 00:06:00.267 08:43:38 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58658 00:06:02.802 ************************************ 00:06:02.802 END TEST spdkcli_tcp 00:06:02.802 ************************************ 00:06:02.802 00:06:02.802 real 0m3.891s 00:06:02.802 user 0m6.959s 00:06:02.802 sys 0m0.574s 00:06:02.802 08:43:40 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.802 08:43:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:02.802 08:43:40 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.802 08:43:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.802 08:43:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.802 08:43:40 -- common/autotest_common.sh@10 -- # set +x 00:06:02.802 ************************************ 00:06:02.802 START TEST dpdk_mem_utility 00:06:02.802 ************************************ 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:02.802 * Looking for test storage... 
00:06:02.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.802 08:43:40 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:02.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.802 --rc genhtml_branch_coverage=1 00:06:02.802 --rc genhtml_function_coverage=1 00:06:02.802 --rc genhtml_legend=1 00:06:02.802 --rc geninfo_all_blocks=1 00:06:02.802 --rc geninfo_unexecuted_blocks=1 00:06:02.802 00:06:02.802 ' 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:02.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.802 --rc 
genhtml_branch_coverage=1 00:06:02.802 --rc genhtml_function_coverage=1 00:06:02.802 --rc genhtml_legend=1 00:06:02.802 --rc geninfo_all_blocks=1 00:06:02.802 --rc geninfo_unexecuted_blocks=1 00:06:02.802 00:06:02.802 ' 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:02.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.802 --rc genhtml_branch_coverage=1 00:06:02.802 --rc genhtml_function_coverage=1 00:06:02.802 --rc genhtml_legend=1 00:06:02.802 --rc geninfo_all_blocks=1 00:06:02.802 --rc geninfo_unexecuted_blocks=1 00:06:02.802 00:06:02.802 ' 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:02.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.802 --rc genhtml_branch_coverage=1 00:06:02.802 --rc genhtml_function_coverage=1 00:06:02.802 --rc genhtml_legend=1 00:06:02.802 --rc geninfo_all_blocks=1 00:06:02.802 --rc geninfo_unexecuted_blocks=1 00:06:02.802 00:06:02.802 ' 00:06:02.802 08:43:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:02.802 08:43:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58780 00:06:02.802 08:43:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58780 00:06:02.802 08:43:40 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58780 ']' 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.802 08:43:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:02.802 [2024-09-28 08:43:40.588254] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:02.802 [2024-09-28 08:43:40.589140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58780 ] 00:06:02.802 [2024-09-28 08:43:40.761044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.062 [2024-09-28 08:43:40.921796] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.321 [2024-09-28 08:43:41.125487] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.891 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.891 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:03.891 08:43:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:03.891 08:43:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:03.891 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:03.891 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.891 { 00:06:03.891 "filename": "/tmp/spdk_mem_dump.txt" 00:06:03.891 } 00:06:03.891 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.891 08:43:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:03.891 DPDK memory size 866.000000 MiB in 1 heap(s) 00:06:03.891 1 heaps totaling size 866.000000 MiB 00:06:03.891 size: 866.000000 MiB heap id: 0 00:06:03.891 end heaps---------- 00:06:03.891 9 mempools totaling size 642.649841 MiB 00:06:03.891 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:03.892 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:03.892 size: 92.545471 MiB name: bdev_io_58780 00:06:03.892 size: 51.011292 MiB name: evtpool_58780 00:06:03.892 size: 50.003479 MiB name: msgpool_58780 00:06:03.892 size: 36.509338 MiB name: fsdev_io_58780 00:06:03.892 size: 21.763794 MiB name: PDU_Pool 00:06:03.892 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:03.892 size: 0.026123 MiB name: Session_Pool 00:06:03.892 end mempools------- 00:06:03.892 6 memzones totaling size 4.142822 MiB 00:06:03.892 size: 1.000366 MiB name: RG_ring_0_58780 00:06:03.892 size: 1.000366 MiB name: RG_ring_1_58780 00:06:03.892 size: 1.000366 MiB name: RG_ring_4_58780 00:06:03.892 size: 1.000366 MiB name: RG_ring_5_58780 00:06:03.892 size: 0.125366 MiB name: RG_ring_2_58780 00:06:03.892 size: 0.015991 MiB name: RG_ring_3_58780 00:06:03.892 end memzones------- 00:06:03.892 08:43:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:03.892 heap id: 0 total size: 866.000000 MiB number of busy elements: 310 number of free elements: 19 00:06:03.892 list of free elements. 
size: 19.914795 MiB 00:06:03.892 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:03.892 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:03.892 element at address: 0x200009600000 with size: 1.995972 MiB 00:06:03.892 element at address: 0x20000d800000 with size: 1.995972 MiB 00:06:03.892 element at address: 0x200007000000 with size: 1.991028 MiB 00:06:03.892 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:06:03.892 element at address: 0x20001c300040 with size: 0.999939 MiB 00:06:03.892 element at address: 0x20001c400000 with size: 0.999084 MiB 00:06:03.892 element at address: 0x200035000000 with size: 0.994324 MiB 00:06:03.892 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:06:03.892 element at address: 0x20001c700040 with size: 0.936401 MiB 00:06:03.892 element at address: 0x200000200000 with size: 0.832153 MiB 00:06:03.892 element at address: 0x20001de00000 with size: 0.560974 MiB 00:06:03.892 element at address: 0x200003e00000 with size: 0.490662 MiB 00:06:03.892 element at address: 0x20001c000000 with size: 0.488953 MiB 00:06:03.892 element at address: 0x20001c800000 with size: 0.485413 MiB 00:06:03.892 element at address: 0x200015e00000 with size: 0.443481 MiB 00:06:03.892 element at address: 0x20002b200000 with size: 0.391663 MiB 00:06:03.892 element at address: 0x200003a00000 with size: 0.352844 MiB 00:06:03.892 list of standard malloc elements. size: 199.286499 MiB 00:06:03.892 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:06:03.892 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:06:03.892 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:06:03.892 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:06:03.892 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:06:03.892 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:03.892 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:06:03.892 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:03.892 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:06:03.892 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:06:03.892 element at address: 0x200015dff040 with size: 0.000305 MiB 00:06:03.892 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d6300 with size: 0.000244 MiB 
00:06:03.892 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:03.892 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003a7e9c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003a7eac0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003a7ebc0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003a7ecc0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003a7edc0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003a7eec0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003a7efc0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003a7f0c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003a7f1c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003a7f2c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003aff700 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7d9c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7dac0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7dbc0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7dcc0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7ddc0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7dec0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7dfc0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7e0c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7e1c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7e2c0 with size: 0.000244 MiB 00:06:03.892 element at 
address: 0x200003e7e3c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7e4c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7e5c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7e6c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7e7c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7e8c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7e9c0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7eac0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7ebc0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003e7ecc0 with size: 0.000244 MiB 00:06:03.892 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:03.892 element at address: 0x20000d7ff200 with size: 0.000244 MiB 00:06:03.892 element at address: 0x20000d7ff300 with size: 0.000244 MiB 00:06:03.892 element at address: 0x20000d7ff400 with size: 0.000244 MiB 00:06:03.892 element at address: 0x20000d7ff500 with size: 0.000244 MiB 00:06:03.892 element at address: 0x20000d7ff600 with size: 0.000244 MiB 00:06:03.892 element at address: 0x20000d7ff700 with size: 0.000244 MiB 00:06:03.892 element at address: 0x20000d7ff800 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20000d7ff900 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20000d7ffa00 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20000d7ffb00 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20000d7ffc00 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20000d7ffd00 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20000d7ffe00 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20000d7fff00 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015dff180 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015dff280 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015dff380 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015dff480 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015dff580 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015dff680 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015dff780 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015dff880 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015dff980 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015dffa80 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015dffb80 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015dffc80 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015dfff00 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015e71880 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015e71980 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015e71a80 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015e71b80 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015e71c80 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015e71d80 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015e71e80 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015e71f80 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015e72080 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015e72180 with size: 0.000244 MiB 00:06:03.893 element at address: 0x200015ef24c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001bcfdd00 
with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001c07d2c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001c07d3c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001c07d4c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001c07d5c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001c07d6c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001c07d7c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001c07d8c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001c07d9c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001c0fdd00 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001c4ffc40 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001c7efbc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001c7efcc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001c8bc680 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de8f9c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de8fac0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de8fbc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de8fcc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de8fdc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de8fec0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de8ffc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de900c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de901c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de902c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de903c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de904c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de905c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de906c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de907c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de908c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de909c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de90ac0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de90bc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de90cc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de90dc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de90ec0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de90fc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de910c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de911c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de912c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de913c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de914c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de915c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de916c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de917c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de918c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de919c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de91ac0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de91bc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de91cc0 with size: 0.000244 MiB 
00:06:03.893 element at address: 0x20001de91dc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de91ec0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de91fc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de920c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de921c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de922c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de923c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de924c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de925c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de926c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de927c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de928c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de929c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de92ac0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de92bc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de92cc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de92dc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de92ec0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de92fc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de930c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de931c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de932c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de933c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de934c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de935c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de936c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de937c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de938c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de939c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de93ac0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de93bc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de93cc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de93dc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de93ec0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de93fc0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de940c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de941c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de942c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de943c0 with size: 0.000244 MiB 00:06:03.893 element at address: 0x20001de944c0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de945c0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de946c0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de947c0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de948c0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de949c0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de94ac0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de94bc0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de94cc0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de94dc0 with size: 0.000244 MiB 00:06:03.894 element at 
address: 0x20001de94ec0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de94fc0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de950c0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de951c0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de952c0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20001de953c0 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b264440 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b264540 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26b200 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26b480 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26b580 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26b680 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26b780 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26b880 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26b980 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26ba80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26bb80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26bc80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26bd80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26be80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26bf80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26c080 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26c180 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26c280 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26c380 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26c480 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26c580 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26c680 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26c780 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26c880 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26c980 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26ca80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26cb80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26cc80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26cd80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26ce80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26cf80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26d080 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26d180 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26d280 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26d380 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26d480 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26d580 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26d680 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26d780 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26d880 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26d980 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26da80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26db80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26dc80 
with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26dd80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26de80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26df80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26e080 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26e180 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26e280 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26e380 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26e480 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26e580 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26e680 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26e780 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26e880 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26e980 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26ea80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26eb80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26ec80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26ed80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26ee80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26ef80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26f080 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26f180 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26f280 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26f380 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26f480 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26f580 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26f680 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26f780 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26f880 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26f980 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26fa80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26fb80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26fc80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26fd80 with size: 0.000244 MiB 00:06:03.894 element at address: 0x20002b26fe80 with size: 0.000244 MiB 00:06:03.894 list of memzone associated elements. 
size: 646.798706 MiB 00:06:03.894 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:06:03.894 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:03.894 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:06:03.894 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:03.894 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:06:03.894 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58780_0 00:06:03.894 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:03.894 associated memzone info: size: 48.002930 MiB name: MP_evtpool_58780_0 00:06:03.894 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:03.894 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58780_0 00:06:03.894 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:06:03.894 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58780_0 00:06:03.894 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:06:03.894 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:03.894 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:06:03.894 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:03.894 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:03.894 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_58780 00:06:03.894 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:03.894 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58780 00:06:03.894 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:03.894 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58780 00:06:03.894 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:06:03.894 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:03.894 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:06:03.894 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:03.894 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:06:03.894 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:03.894 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:06:03.894 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:03.894 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:03.894 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58780 00:06:03.895 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:03.895 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58780 00:06:03.895 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:06:03.895 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58780 00:06:03.895 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:06:03.895 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58780 00:06:03.895 element at address: 0x200003a7f4c0 with size: 0.500549 MiB 00:06:03.895 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58780 00:06:03.895 element at address: 0x200003e7edc0 with size: 0.500549 MiB 00:06:03.895 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58780 00:06:03.895 element at address: 0x20001c07dac0 with size: 0.500549 MiB 00:06:03.895 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:03.895 element at address: 0x200015e72280 with size: 0.500549 MiB 00:06:03.895 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:03.895 element at address: 0x20001c87c440 with size: 0.250549 MiB 00:06:03.895 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:03.895 element at address: 0x200003a5e780 with size: 0.125549 MiB 00:06:03.895 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58780 00:06:03.895 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB 00:06:03.895 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:03.895 element at address: 0x20002b264640 with size: 0.023804 MiB 00:06:03.895 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:03.895 element at address: 0x200003a5a540 with size: 0.016174 MiB 00:06:03.895 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58780 00:06:03.895 element at address: 0x20002b26a7c0 with size: 0.002502 MiB 00:06:03.895 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:03.895 element at address: 0x2000002d6180 with size: 0.000366 MiB 00:06:03.895 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58780 00:06:03.895 element at address: 0x200003aff800 with size: 0.000366 MiB 00:06:03.895 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58780 00:06:03.895 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:06:03.895 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58780 00:06:03.895 element at address: 0x20002b26b300 with size: 0.000366 MiB 00:06:03.895 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:03.895 08:43:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:03.895 08:43:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58780 00:06:03.895 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58780 ']' 00:06:03.895 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58780 00:06:03.895 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:03.895 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.895 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58780 00:06:03.895 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.895 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.895 killing process with pid 58780 00:06:03.895 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58780' 00:06:03.895 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58780 00:06:03.895 08:43:41 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58780 00:06:05.799 00:06:05.799 real 0m3.375s 00:06:05.799 user 0m3.499s 00:06:05.799 sys 0m0.507s 00:06:05.799 08:43:43 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.799 08:43:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:05.799 ************************************ 00:06:05.799 END TEST dpdk_mem_utility 00:06:05.799 ************************************ 00:06:05.799 08:43:43 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:05.799 08:43:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.799 08:43:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.799 08:43:43 -- common/autotest_common.sh@10 -- # set +x 
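The dpdk_mem_utility run above exercises a flow that can also be driven by hand: the env_dpdk_get_mem_stats RPC makes the target write a dump to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then reads that dump, producing either the heap/mempool/memzone totals shown first or, as with the -m 0 invocation above, the per-element listing for heap 0. A minimal sketch of the same three steps, assuming an SPDK checkout at /home/vagrant/spdk_repo/spdk, a spdk_tgt already listening on the default /var/tmp/spdk.sock, and that rpc.py exposes env_dpdk_get_mem_stats as a subcommand (the test above goes through its rpc_cmd wrapper instead):

  # 1. ask the running target to dump DPDK memory statistics to /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # 2. summarize heaps, mempools and memzones from the dump
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  # 3. show the detailed element listing for heap 0, as in the output above
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0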
00:06:05.799 ************************************ 00:06:05.799 START TEST event 00:06:05.799 ************************************ 00:06:05.799 08:43:43 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:05.799 * Looking for test storage... 00:06:05.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:05.799 08:43:43 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.799 08:43:43 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:05.799 08:43:43 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:06.058 08:43:43 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:06.058 08:43:43 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.058 08:43:43 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.058 08:43:43 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.058 08:43:43 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.058 08:43:43 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.058 08:43:43 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.058 08:43:43 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.058 08:43:43 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.058 08:43:43 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.058 08:43:43 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.058 08:43:43 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.058 08:43:43 event -- scripts/common.sh@344 -- # case "$op" in 00:06:06.058 08:43:43 event -- scripts/common.sh@345 -- # : 1 00:06:06.058 08:43:43 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.058 08:43:43 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.058 08:43:43 event -- scripts/common.sh@365 -- # decimal 1 00:06:06.058 08:43:43 event -- scripts/common.sh@353 -- # local d=1 00:06:06.058 08:43:43 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.058 08:43:43 event -- scripts/common.sh@355 -- # echo 1 00:06:06.058 08:43:43 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.058 08:43:43 event -- scripts/common.sh@366 -- # decimal 2 00:06:06.058 08:43:43 event -- scripts/common.sh@353 -- # local d=2 00:06:06.058 08:43:43 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.058 08:43:43 event -- scripts/common.sh@355 -- # echo 2 00:06:06.058 08:43:43 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.058 08:43:43 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.058 08:43:43 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.058 08:43:43 event -- scripts/common.sh@368 -- # return 0 00:06:06.058 08:43:43 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.059 08:43:43 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:06.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.059 --rc genhtml_branch_coverage=1 00:06:06.059 --rc genhtml_function_coverage=1 00:06:06.059 --rc genhtml_legend=1 00:06:06.059 --rc geninfo_all_blocks=1 00:06:06.059 --rc geninfo_unexecuted_blocks=1 00:06:06.059 00:06:06.059 ' 00:06:06.059 08:43:43 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:06.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.059 --rc genhtml_branch_coverage=1 00:06:06.059 --rc genhtml_function_coverage=1 00:06:06.059 --rc genhtml_legend=1 00:06:06.059 --rc 
geninfo_all_blocks=1 00:06:06.059 --rc geninfo_unexecuted_blocks=1 00:06:06.059 00:06:06.059 ' 00:06:06.059 08:43:43 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:06.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.059 --rc genhtml_branch_coverage=1 00:06:06.059 --rc genhtml_function_coverage=1 00:06:06.059 --rc genhtml_legend=1 00:06:06.059 --rc geninfo_all_blocks=1 00:06:06.059 --rc geninfo_unexecuted_blocks=1 00:06:06.059 00:06:06.059 ' 00:06:06.059 08:43:43 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:06.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.059 --rc genhtml_branch_coverage=1 00:06:06.059 --rc genhtml_function_coverage=1 00:06:06.059 --rc genhtml_legend=1 00:06:06.059 --rc geninfo_all_blocks=1 00:06:06.059 --rc geninfo_unexecuted_blocks=1 00:06:06.059 00:06:06.059 ' 00:06:06.059 08:43:43 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:06.059 08:43:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:06.059 08:43:43 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.059 08:43:43 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:06.059 08:43:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.059 08:43:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.059 ************************************ 00:06:06.059 START TEST event_perf 00:06:06.059 ************************************ 00:06:06.059 08:43:43 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:06.059 Running I/O for 1 seconds...[2024-09-28 08:43:43.910354] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:06.059 [2024-09-28 08:43:43.910503] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58877 ] 00:06:06.318 [2024-09-28 08:43:44.067008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:06.318 [2024-09-28 08:43:44.232989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.318 [2024-09-28 08:43:44.233173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.318 [2024-09-28 08:43:44.233243] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:06.318 [2024-09-28 08:43:44.233227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.693 Running I/O for 1 seconds... 00:06:07.693 lcore 0: 192219 00:06:07.693 lcore 1: 192221 00:06:07.693 lcore 2: 192221 00:06:07.693 lcore 3: 192218 00:06:07.693 done. 
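The lcore counters above come from test/event/event_perf/event_perf run with -m 0xF -t 1, i.e. one reactor on each of four cores polling for one second, each lcore line reporting the number of events that reactor processed in that window. Rerunning it by hand only needs the built binary; the alternative core mask and duration below are illustrative values, not taken from this run:

  # same benchmark on two cores for five seconds
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5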
00:06:07.693 00:06:07.693 real 0m1.671s 00:06:07.693 user 0m4.447s 00:06:07.693 sys 0m0.101s 00:06:07.693 08:43:45 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.693 08:43:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:07.693 ************************************ 00:06:07.693 END TEST event_perf 00:06:07.693 ************************************ 00:06:07.693 08:43:45 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:07.693 08:43:45 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:07.693 08:43:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.693 08:43:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.693 ************************************ 00:06:07.693 START TEST event_reactor 00:06:07.693 ************************************ 00:06:07.693 08:43:45 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:07.693 [2024-09-28 08:43:45.646957] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:07.693 [2024-09-28 08:43:45.647126] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58922 ] 00:06:07.951 [2024-09-28 08:43:45.816034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.210 [2024-09-28 08:43:45.970876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.585 test_start 00:06:09.585 oneshot 00:06:09.585 tick 100 00:06:09.585 tick 100 00:06:09.585 tick 250 00:06:09.585 tick 100 00:06:09.585 tick 100 00:06:09.585 tick 100 00:06:09.585 tick 250 00:06:09.585 tick 500 00:06:09.585 tick 100 00:06:09.585 tick 100 00:06:09.585 tick 250 00:06:09.585 tick 100 00:06:09.585 tick 100 00:06:09.585 test_end 00:06:09.585 00:06:09.585 real 0m1.698s 00:06:09.585 user 0m1.498s 00:06:09.585 sys 0m0.091s 00:06:09.585 08:43:47 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.585 08:43:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:09.585 ************************************ 00:06:09.585 END TEST event_reactor 00:06:09.585 ************************************ 00:06:09.585 08:43:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.585 08:43:47 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:09.585 08:43:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.585 08:43:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.585 ************************************ 00:06:09.585 START TEST event_reactor_perf 00:06:09.585 ************************************ 00:06:09.585 08:43:47 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:09.585 [2024-09-28 08:43:47.394650] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:09.585 [2024-09-28 08:43:47.394879] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58964 ] 00:06:09.585 [2024-09-28 08:43:47.559038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.845 [2024-09-28 08:43:47.707335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.222 test_start 00:06:11.222 test_end 00:06:11.222 Performance: 337084 events per second 00:06:11.222 00:06:11.222 real 0m1.674s 00:06:11.222 user 0m1.471s 00:06:11.222 sys 0m0.094s 00:06:11.222 08:43:49 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.222 08:43:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.222 ************************************ 00:06:11.222 END TEST event_reactor_perf 00:06:11.222 ************************************ 00:06:11.222 08:43:49 event -- event/event.sh@49 -- # uname -s 00:06:11.222 08:43:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:11.222 08:43:49 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:11.222 08:43:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.222 08:43:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.222 08:43:49 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.222 ************************************ 00:06:11.222 START TEST event_scheduler 00:06:11.222 ************************************ 00:06:11.222 08:43:49 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:11.222 * Looking for test storage... 
00:06:11.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:11.222 08:43:49 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:11.222 08:43:49 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:11.222 08:43:49 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:11.483 08:43:49 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:11.483 08:43:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.484 08:43:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:11.484 08:43:49 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.484 08:43:49 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.484 08:43:49 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.484 08:43:49 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:11.484 08:43:49 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.484 08:43:49 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:11.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.484 --rc genhtml_branch_coverage=1 00:06:11.484 --rc genhtml_function_coverage=1 00:06:11.484 --rc genhtml_legend=1 00:06:11.484 --rc geninfo_all_blocks=1 00:06:11.484 --rc geninfo_unexecuted_blocks=1 00:06:11.484 00:06:11.484 ' 00:06:11.484 08:43:49 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:11.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.484 --rc genhtml_branch_coverage=1 00:06:11.484 --rc genhtml_function_coverage=1 00:06:11.484 --rc genhtml_legend=1 00:06:11.484 --rc geninfo_all_blocks=1 00:06:11.484 --rc geninfo_unexecuted_blocks=1 00:06:11.484 00:06:11.484 ' 00:06:11.484 08:43:49 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:11.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.484 --rc genhtml_branch_coverage=1 00:06:11.484 --rc genhtml_function_coverage=1 00:06:11.484 --rc genhtml_legend=1 00:06:11.484 --rc geninfo_all_blocks=1 00:06:11.484 --rc geninfo_unexecuted_blocks=1 00:06:11.484 00:06:11.484 ' 00:06:11.484 08:43:49 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:11.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.484 --rc genhtml_branch_coverage=1 00:06:11.484 --rc genhtml_function_coverage=1 00:06:11.484 --rc genhtml_legend=1 00:06:11.484 --rc geninfo_all_blocks=1 00:06:11.484 --rc geninfo_unexecuted_blocks=1 00:06:11.484 00:06:11.484 ' 00:06:11.484 08:43:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:11.484 08:43:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59035 00:06:11.484 08:43:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.484 08:43:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59035 00:06:11.484 08:43:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:11.484 08:43:49 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 59035 ']' 00:06:11.484 08:43:49 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.484 08:43:49 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.484 08:43:49 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.484 08:43:49 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.484 08:43:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:11.484 [2024-09-28 08:43:49.375611] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:11.484 [2024-09-28 08:43:49.375817] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59035 ] 00:06:11.750 [2024-09-28 08:43:49.551618] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.009 [2024-09-28 08:43:49.764930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.009 [2024-09-28 08:43:49.765070] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.009 [2024-09-28 08:43:49.765187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.009 [2024-09-28 08:43:49.765192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.576 08:43:50 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.576 08:43:50 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:12.576 08:43:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:12.576 08:43:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.576 08:43:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.576 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:12.577 POWER: Cannot set governor of lcore 0 to userspace 00:06:12.577 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:12.577 POWER: Cannot set governor of lcore 0 to performance 00:06:12.577 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:12.577 POWER: Cannot set governor of lcore 0 to userspace 00:06:12.577 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:12.577 POWER: Cannot set governor of lcore 0 to userspace 00:06:12.577 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:12.577 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:12.577 POWER: Unable to set Power Management Environment for lcore 0 00:06:12.577 [2024-09-28 08:43:50.363457] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:12.577 [2024-09-28 08:43:50.363481] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:12.577 [2024-09-28 08:43:50.363494] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:12.577 [2024-09-28 08:43:50.363516] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:12.577 [2024-09-28 08:43:50.363527] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:12.577 [2024-09-28 08:43:50.363538] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:12.577 08:43:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.577 08:43:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:12.577 08:43:50 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.577 08:43:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.577 [2024-09-28 08:43:50.527598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.836 [2024-09-28 08:43:50.613374] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:12.836 08:43:50 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.836 08:43:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:12.836 08:43:50 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.836 08:43:50 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.836 08:43:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:12.836 ************************************ 00:06:12.836 START TEST scheduler_create_thread 00:06:12.836 ************************************ 00:06:12.836 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:12.836 08:43:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:12.836 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.836 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.836 2 00:06:12.836 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.836 08:43:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:12.836 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.836 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.836 3 00:06:12.836 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.836 08:43:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:12.836 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.836 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.836 4 00:06:12.836 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.837 5 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.837 6 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.837 7 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.837 8 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.837 9 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.837 10 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:12.837 08:43:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.837 08:43:50 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.405 08:43:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.405 08:43:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:13.405 08:43:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:13.405 08:43:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.405 08:43:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.341 08:43:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.341 08:43:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:14.342 08:43:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.342 08:43:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.278 08:43:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.278 08:43:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:15.278 08:43:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:15.278 08:43:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.278 08:43:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.215 08:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.215 00:06:16.215 real 0m3.223s 00:06:16.215 user 0m0.021s 00:06:16.215 sys 0m0.006s 00:06:16.215 08:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.215 ************************************ 00:06:16.215 END TEST scheduler_create_thread 00:06:16.215 08:43:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.215 ************************************ 00:06:16.215 08:43:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:16.215 08:43:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59035 00:06:16.215 08:43:53 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 59035 ']' 00:06:16.215 08:43:53 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 59035 00:06:16.215 08:43:53 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:16.215 08:43:53 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.215 08:43:53 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59035 00:06:16.215 08:43:53 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:16.215 08:43:53 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:16.215 08:43:53 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59035' 00:06:16.215 killing process with pid 
59035 00:06:16.215 08:43:53 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 59035 00:06:16.215 08:43:53 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 59035 00:06:16.473 [2024-09-28 08:43:54.228503] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:17.411 00:06:17.411 real 0m6.214s 00:06:17.411 user 0m12.122s 00:06:17.411 sys 0m0.427s 00:06:17.411 08:43:55 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.411 08:43:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.411 ************************************ 00:06:17.411 END TEST event_scheduler 00:06:17.411 ************************************ 00:06:17.411 08:43:55 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:17.411 08:43:55 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:17.411 08:43:55 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.411 08:43:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.411 08:43:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.411 ************************************ 00:06:17.411 START TEST app_repeat 00:06:17.411 ************************************ 00:06:17.411 08:43:55 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59152 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:17.411 Process app_repeat pid: 59152 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59152' 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.411 spdk_app_start Round 0 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:17.411 08:43:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59152 /var/tmp/spdk-nbd.sock 00:06:17.411 08:43:55 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59152 ']' 00:06:17.411 08:43:55 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.411 08:43:55 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.411 08:43:55 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
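For reference, the event_scheduler run traced above reduces to a short RPC sequence against the scheduler test app. The following is a hedged sketch reconstructed only from the xtrace lines; it assumes the app was started with --wait-for-rpc on the default /var/tmp/spdk.sock socket and that rpc.py forwards the --plugin flag the same way the harness's rpc_cmd wrapper does. It is not the test script itself.

#!/usr/bin/env bash
# Sketch of the scheduler RPC flow traced above (assumption: scheduler app
# launched with `-m 0xF -p 0x2 --wait-for-rpc -f`, default RPC socket).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Select the dynamic scheduler before framework init; the log reports its
# load/core/busy limits back as 20/80/95.
$rpc framework_set_scheduler dynamic
$rpc framework_start_init

# One fully active (-a 100) and one fully idle (-a 0) thread pinned to each
# of the four cores, plus two unpinned threads at 30% and 0% activity.
for mask in 0x1 0x2 0x4 0x8; do
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done
$rpc --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
half_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)

# Raise the half_active thread to 50% activity, then create a thread only to
# delete it again (thread ids 11 and 12 in the trace).
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$half_id" 50
del_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
$rpc --plugin scheduler_plugin scheduler_thread_delete "$del_id"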
00:06:17.411 08:43:55 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.411 08:43:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.670 [2024-09-28 08:43:55.422650] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:17.670 [2024-09-28 08:43:55.422859] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59152 ] 00:06:17.670 [2024-09-28 08:43:55.593137] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.929 [2024-09-28 08:43:55.754729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.929 [2024-09-28 08:43:55.754743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.929 [2024-09-28 08:43:55.913721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:18.495 08:43:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.495 08:43:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:18.495 08:43:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.753 Malloc0 00:06:18.754 08:43:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.012 Malloc1 00:06:19.012 08:43:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.012 08:43:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.270 /dev/nbd0 00:06:19.270 08:43:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.270 08:43:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@868 -- # local 
nbd_name=nbd0 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.270 1+0 records in 00:06:19.270 1+0 records out 00:06:19.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304574 s, 13.4 MB/s 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.270 08:43:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:19.270 08:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.270 08:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.270 08:43:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.529 /dev/nbd1 00:06:19.529 08:43:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.529 08:43:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.529 08:43:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:19.529 08:43:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:19.529 08:43:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:19.529 08:43:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:19.529 08:43:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:19.529 08:43:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:19.529 08:43:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:19.529 08:43:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:19.529 08:43:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.529 1+0 records in 00:06:19.529 1+0 records out 00:06:19.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033111 s, 12.4 MB/s 00:06:19.529 08:43:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.529 08:43:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:19.529 08:43:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.529 08:43:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:19.529 08:43:57 event.app_repeat -- 
common/autotest_common.sh@889 -- # return 0 00:06:19.529 08:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.529 08:43:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.529 08:43:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.529 08:43:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.529 08:43:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.786 08:43:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.786 { 00:06:19.786 "nbd_device": "/dev/nbd0", 00:06:19.786 "bdev_name": "Malloc0" 00:06:19.786 }, 00:06:19.786 { 00:06:19.786 "nbd_device": "/dev/nbd1", 00:06:19.786 "bdev_name": "Malloc1" 00:06:19.786 } 00:06:19.786 ]' 00:06:19.786 08:43:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.786 { 00:06:19.786 "nbd_device": "/dev/nbd0", 00:06:19.786 "bdev_name": "Malloc0" 00:06:19.786 }, 00:06:19.786 { 00:06:19.786 "nbd_device": "/dev/nbd1", 00:06:19.786 "bdev_name": "Malloc1" 00:06:19.786 } 00:06:19.786 ]' 00:06:19.786 08:43:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.786 08:43:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.786 /dev/nbd1' 00:06:19.786 08:43:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.786 /dev/nbd1' 00:06:19.786 08:43:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.786 08:43:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.787 08:43:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.787 08:43:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.787 08:43:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.787 08:43:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.787 08:43:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.787 08:43:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.787 08:43:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.787 08:43:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.787 08:43:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.787 08:43:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.787 256+0 records in 00:06:19.787 256+0 records out 00:06:19.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00773389 s, 136 MB/s 00:06:20.044 08:43:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.044 08:43:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.044 256+0 records in 00:06:20.044 256+0 records out 00:06:20.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0332656 s, 31.5 MB/s 00:06:20.044 08:43:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.044 08:43:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.044 256+0 records in 00:06:20.044 
256+0 records out 00:06:20.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314726 s, 33.3 MB/s 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.045 08:43:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.303 08:43:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.303 08:43:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.303 08:43:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.303 08:43:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.303 08:43:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.303 08:43:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.303 08:43:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.303 08:43:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.303 08:43:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.304 08:43:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.563 08:43:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.563 08:43:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.563 08:43:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.563 08:43:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.563 08:43:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
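The dd/cmp lines above are the data-verify core of app_repeat: write one random 1 MiB file, push it onto each exported nbd device with O_DIRECT, then byte-compare the device contents against the file. A minimal sketch of that step, using the same paths and sizes as the trace (adjust the nbdrandtest location if reproducing outside the harness):

#!/usr/bin/env bash
# Write/verify step as traced above.
randfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

# 1 MiB of random data, written to each nbd device with direct I/O.
dd if=/dev/urandom of="$randfile" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$randfile" of="$nbd" bs=4096 count=256 oflag=direct
done

# Read back and byte-compare the first 1 MiB of each device against the file.
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$randfile" "$nbd"
done
rm "$randfile"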
00:06:20.563 08:43:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.563 08:43:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.563 08:43:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.563 08:43:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.563 08:43:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.563 08:43:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.822 08:43:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.822 08:43:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.822 08:43:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.822 08:43:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.822 08:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.822 08:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.822 08:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.822 08:43:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.822 08:43:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.822 08:43:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.822 08:43:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.822 08:43:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.822 08:43:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.389 08:43:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.324 [2024-09-28 08:44:00.204229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.582 [2024-09-28 08:44:00.359628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.582 [2024-09-28 08:44:00.359636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.582 [2024-09-28 08:44:00.499852] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.582 [2024-09-28 08:44:00.499956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.582 [2024-09-28 08:44:00.499978] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.482 08:44:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.482 08:44:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:24.483 spdk_app_start Round 1 00:06:24.483 08:44:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59152 /var/tmp/spdk-nbd.sock 00:06:24.483 08:44:02 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59152 ']' 00:06:24.483 08:44:02 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.483 08:44:02 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.483 08:44:02 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
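Each spdk_app_start round repeats the same per-round setup that Round 0 traced: create two malloc bdevs over the app's /var/tmp/spdk-nbd.sock RPC socket, export them as /dev/nbd0 and /dev/nbd1, and wait for the nodes to show up in /proc/partitions. A hedged sketch of that setup using only the calls visible in the trace (the retry/sleep around /proc/partitions is implied by the i <= 20 counters, not shown verbatim):

#!/usr/bin/env bash
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

# Two 64 MB malloc bdevs with 4096-byte blocks; rpc.py prints the generated
# bdev names (Malloc0 and Malloc1 in the trace).
malloc0=$($rpc bdev_malloc_create 64 4096)
malloc1=$($rpc bdev_malloc_create 64 4096)

# Export each bdev through the kernel nbd driver (modprobe nbd was probed
# earlier in the run), then poll for the device node.
$rpc nbd_start_disk "$malloc0" /dev/nbd0
$rpc nbd_start_disk "$malloc1" /dev/nbd1
for name in nbd0 nbd1; do
    for _ in $(seq 1 20); do
        grep -q -w "$name" /proc/partitions && break
        sleep 0.1   # assumed back-off; the trace only shows the counter
    done
done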
00:06:24.483 08:44:02 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.483 08:44:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.741 08:44:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.741 08:44:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:24.741 08:44:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.000 Malloc0 00:06:25.000 08:44:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.258 Malloc1 00:06:25.259 08:44:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.259 08:44:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:25.517 /dev/nbd0 00:06:25.517 08:44:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.517 08:44:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.517 08:44:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:25.517 08:44:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:25.517 08:44:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:25.517 08:44:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:25.517 08:44:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:25.517 08:44:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:25.517 08:44:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:25.517 08:44:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:25.517 08:44:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.517 1+0 records in 00:06:25.517 1+0 records out 
00:06:25.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252584 s, 16.2 MB/s 00:06:25.517 08:44:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.517 08:44:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:25.517 08:44:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:25.776 08:44:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:25.776 08:44:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:25.776 08:44:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.776 08:44:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.776 08:44:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.034 /dev/nbd1 00:06:26.034 08:44:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.034 08:44:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.034 1+0 records in 00:06:26.034 1+0 records out 00:06:26.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030772 s, 13.3 MB/s 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.034 08:44:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:26.034 08:44:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.034 08:44:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.034 08:44:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.034 08:44:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.034 08:44:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.292 { 00:06:26.292 "nbd_device": "/dev/nbd0", 00:06:26.292 "bdev_name": "Malloc0" 00:06:26.292 }, 00:06:26.292 { 00:06:26.292 "nbd_device": "/dev/nbd1", 00:06:26.292 "bdev_name": "Malloc1" 00:06:26.292 } 
00:06:26.292 ]' 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.292 { 00:06:26.292 "nbd_device": "/dev/nbd0", 00:06:26.292 "bdev_name": "Malloc0" 00:06:26.292 }, 00:06:26.292 { 00:06:26.292 "nbd_device": "/dev/nbd1", 00:06:26.292 "bdev_name": "Malloc1" 00:06:26.292 } 00:06:26.292 ]' 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.292 /dev/nbd1' 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.292 /dev/nbd1' 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.292 256+0 records in 00:06:26.292 256+0 records out 00:06:26.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00670902 s, 156 MB/s 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.292 256+0 records in 00:06:26.292 256+0 records out 00:06:26.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244585 s, 42.9 MB/s 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.292 256+0 records in 00:06:26.292 256+0 records out 00:06:26.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307509 s, 34.1 MB/s 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.292 08:44:04 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.292 08:44:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.551 08:44:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.551 08:44:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.551 08:44:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.551 08:44:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.551 08:44:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.551 08:44:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.551 08:44:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.551 08:44:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.551 08:44:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.551 08:44:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.118 08:44:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.118 08:44:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.118 08:44:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.118 08:44:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.118 08:44:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.118 08:44:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.118 08:44:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.118 08:44:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.118 08:44:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.118 08:44:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.118 08:44:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.377 08:44:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.377 08:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.377 08:44:05 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:27.377 08:44:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.377 08:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.377 08:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.377 08:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.377 08:44:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.377 08:44:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.377 08:44:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.377 08:44:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.377 08:44:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.377 08:44:05 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.945 08:44:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.921 [2024-09-28 08:44:06.635895] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.921 [2024-09-28 08:44:06.788424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.921 [2024-09-28 08:44:06.788426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.180 [2024-09-28 08:44:06.930598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.180 [2024-09-28 08:44:06.930744] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.180 [2024-09-28 08:44:06.930765] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.085 spdk_app_start Round 2 00:06:31.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:31.085 08:44:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.085 08:44:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:31.085 08:44:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59152 /var/tmp/spdk-nbd.sock 00:06:31.085 08:44:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59152 ']' 00:06:31.085 08:44:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.085 08:44:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.085 08:44:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
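Between rounds the trace shows the matching teardown: stop both nbd exports, confirm nbd_get_disks now returns an empty list, and signal the app to exit before the next spdk_app_start round. A hedged sketch of that teardown (the polling loop and the trailing `|| true` on the empty-list grep are inferred from the counters and the `true` step in the trace):

#!/usr/bin/env bash
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

# Detach both nbd devices and wait for each node to leave /proc/partitions.
for name in nbd0 nbd1; do
    $rpc nbd_stop_disk "/dev/$name"
    for _ in $(seq 1 20); do
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1   # assumed back-off
    done
done

# nbd_get_disks should now report []; count remaining /dev/nbd entries.
disks_json=$($rpc nbd_get_disks)
count=$(echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$count" -eq 0 ]

# Ask the app to exit cleanly before the next round starts.
$rpc spdk_kill_instance SIGTERM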
00:06:31.085 08:44:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.085 08:44:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.085 08:44:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.085 08:44:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:31.085 08:44:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.344 Malloc0 00:06:31.344 08:44:09 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.603 Malloc1 00:06:31.603 08:44:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.603 08:44:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.862 /dev/nbd0 00:06:31.862 08:44:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.862 08:44:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.862 1+0 records in 00:06:31.862 1+0 records out 
00:06:31.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301797 s, 13.6 MB/s 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.862 08:44:09 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:31.862 08:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.862 08:44:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.862 08:44:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.122 /dev/nbd1 00:06:32.381 08:44:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.381 08:44:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.381 1+0 records in 00:06:32.381 1+0 records out 00:06:32.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029246 s, 14.0 MB/s 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.381 08:44:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:32.381 08:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.381 08:44:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.381 08:44:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.381 08:44:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.381 08:44:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.640 { 00:06:32.640 "nbd_device": "/dev/nbd0", 00:06:32.640 "bdev_name": "Malloc0" 00:06:32.640 }, 00:06:32.640 { 00:06:32.640 "nbd_device": "/dev/nbd1", 00:06:32.640 "bdev_name": "Malloc1" 00:06:32.640 } 
00:06:32.640 ]' 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.640 { 00:06:32.640 "nbd_device": "/dev/nbd0", 00:06:32.640 "bdev_name": "Malloc0" 00:06:32.640 }, 00:06:32.640 { 00:06:32.640 "nbd_device": "/dev/nbd1", 00:06:32.640 "bdev_name": "Malloc1" 00:06:32.640 } 00:06:32.640 ]' 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.640 /dev/nbd1' 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.640 /dev/nbd1' 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.640 256+0 records in 00:06:32.640 256+0 records out 00:06:32.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767519 s, 137 MB/s 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.640 256+0 records in 00:06:32.640 256+0 records out 00:06:32.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281903 s, 37.2 MB/s 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.640 256+0 records in 00:06:32.640 256+0 records out 00:06:32.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308195 s, 34.0 MB/s 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.640 08:44:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.641 08:44:10 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.641 08:44:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.900 08:44:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.900 08:44:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.900 08:44:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.900 08:44:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.900 08:44:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.900 08:44:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.900 08:44:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.900 08:44:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.900 08:44:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.900 08:44:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.159 08:44:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.160 08:44:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.160 08:44:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.160 08:44:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.160 08:44:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.160 08:44:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.160 08:44:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.160 08:44:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.160 08:44:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.160 08:44:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.160 08:44:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.727 08:44:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.727 08:44:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.727 08:44:11 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:33.727 08:44:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.727 08:44:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.727 08:44:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.727 08:44:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.727 08:44:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.727 08:44:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.727 08:44:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.727 08:44:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.727 08:44:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.727 08:44:11 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.987 08:44:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.924 [2024-09-28 08:44:12.881989] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.184 [2024-09-28 08:44:13.024110] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.184 [2024-09-28 08:44:13.024133] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.184 [2024-09-28 08:44:13.167822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:35.184 [2024-09-28 08:44:13.167968] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.184 [2024-09-28 08:44:13.167997] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.088 08:44:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59152 /var/tmp/spdk-nbd.sock 00:06:37.088 08:44:14 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 59152 ']' 00:06:37.088 08:44:14 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.088 08:44:14 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.088 08:44:14 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
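[editorial sketch] Condensed from the nbd_common.sh trace above, one app_repeat iteration's NBD data-verify round trip looks roughly like the following (the RPC variable is introduced here only for readability; every command appears verbatim in the trace):
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096                      # -> Malloc0
    $RPC bdev_malloc_create 64 4096                      # -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256  # 1 MiB of random data
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct
        cmp -b -n 1M nbdrandtest $d                      # read back and verify
    done
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    $RPC nbd_get_disks | jq -r '.[] | .nbd_device'       # expect an empty list afterwards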
00:06:37.088 08:44:14 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.088 08:44:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.348 08:44:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.348 08:44:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:37.348 08:44:15 event.app_repeat -- event/event.sh@39 -- # killprocess 59152 00:06:37.348 08:44:15 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 59152 ']' 00:06:37.348 08:44:15 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 59152 00:06:37.348 08:44:15 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:37.348 08:44:15 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.348 08:44:15 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59152 00:06:37.348 killing process with pid 59152 00:06:37.348 08:44:15 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.348 08:44:15 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.348 08:44:15 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59152' 00:06:37.348 08:44:15 event.app_repeat -- common/autotest_common.sh@969 -- # kill 59152 00:06:37.348 08:44:15 event.app_repeat -- common/autotest_common.sh@974 -- # wait 59152 00:06:38.285 spdk_app_start is called in Round 0. 00:06:38.285 Shutdown signal received, stop current app iteration 00:06:38.285 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:06:38.285 spdk_app_start is called in Round 1. 00:06:38.285 Shutdown signal received, stop current app iteration 00:06:38.285 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:06:38.285 spdk_app_start is called in Round 2. 00:06:38.285 Shutdown signal received, stop current app iteration 00:06:38.285 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:06:38.285 spdk_app_start is called in Round 3. 00:06:38.285 Shutdown signal received, stop current app iteration 00:06:38.285 08:44:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:38.285 08:44:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:38.285 00:06:38.285 real 0m20.833s 00:06:38.285 user 0m45.649s 00:06:38.285 sys 0m2.698s 00:06:38.285 08:44:16 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.285 ************************************ 00:06:38.285 END TEST app_repeat 00:06:38.285 08:44:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.285 ************************************ 00:06:38.285 08:44:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:38.285 08:44:16 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:38.285 08:44:16 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.285 08:44:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.285 08:44:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.285 ************************************ 00:06:38.285 START TEST cpu_locks 00:06:38.285 ************************************ 00:06:38.285 08:44:16 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:38.544 * Looking for test storage... 
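[editorial sketch] The app_repeat wrap-up just above shows how the rounds are driven: after each data-verify pass the script asks the app to cycle itself and then re-attaches, which is why the app prints "spdk_app_start is called in Round 0..3 / Shutdown signal received" four times. The per-round teardown traced here (pid 59152 in this run; loop structure inferred, not literally traced) is roughly:
    $RPC spdk_kill_instance SIGTERM                  # event.sh@34: signal the app to restart
    sleep 3                                          # event.sh@35: let it come back up
    waitforlisten 59152 /var/tmp/spdk-nbd.sock       # event.sh@38: re-attach to the same socket
    killprocess 59152                                # event.sh@39: final teardown after the last round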
00:06:38.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:38.544 08:44:16 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.544 08:44:16 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.544 08:44:16 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.544 08:44:16 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.544 08:44:16 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.544 08:44:16 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.544 08:44:16 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.544 08:44:16 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.544 08:44:16 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.544 08:44:16 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.544 08:44:16 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.544 08:44:16 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.544 08:44:16 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.544 08:44:16 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.545 08:44:16 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:38.545 08:44:16 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.545 08:44:16 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.545 --rc genhtml_branch_coverage=1 00:06:38.545 --rc genhtml_function_coverage=1 00:06:38.545 --rc genhtml_legend=1 00:06:38.545 --rc geninfo_all_blocks=1 00:06:38.545 --rc geninfo_unexecuted_blocks=1 00:06:38.545 00:06:38.545 ' 00:06:38.545 08:44:16 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.545 --rc genhtml_branch_coverage=1 00:06:38.545 --rc genhtml_function_coverage=1 
00:06:38.545 --rc genhtml_legend=1 00:06:38.545 --rc geninfo_all_blocks=1 00:06:38.545 --rc geninfo_unexecuted_blocks=1 00:06:38.545 00:06:38.545 ' 00:06:38.545 08:44:16 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.545 --rc genhtml_branch_coverage=1 00:06:38.545 --rc genhtml_function_coverage=1 00:06:38.545 --rc genhtml_legend=1 00:06:38.545 --rc geninfo_all_blocks=1 00:06:38.545 --rc geninfo_unexecuted_blocks=1 00:06:38.545 00:06:38.545 ' 00:06:38.545 08:44:16 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.545 --rc genhtml_branch_coverage=1 00:06:38.545 --rc genhtml_function_coverage=1 00:06:38.545 --rc genhtml_legend=1 00:06:38.545 --rc geninfo_all_blocks=1 00:06:38.545 --rc geninfo_unexecuted_blocks=1 00:06:38.545 00:06:38.545 ' 00:06:38.545 08:44:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:38.545 08:44:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:38.545 08:44:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:38.545 08:44:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:38.545 08:44:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.545 08:44:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.545 08:44:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.545 ************************************ 00:06:38.545 START TEST default_locks 00:06:38.545 ************************************ 00:06:38.545 08:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:38.545 08:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59610 00:06:38.545 08:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.545 08:44:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59610 00:06:38.545 08:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59610 ']' 00:06:38.545 08:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.545 08:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.545 08:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.545 08:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.545 08:44:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.805 [2024-09-28 08:44:16.561392] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
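[editorial sketch] The scripts/common.sh walk near the top of cpu_locks above is only a version gate: lt/cmp_versions split the two version strings on ".-:" and compare field by field, and because the installed lcov (1.15) is older than 2.x the suite exports the legacy "--rc lcov_*" option spelling. Reduced to its effect (the if-structure is an assumption; the helper names and flags are from the trace):
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 ...'
        export LCOV="lcov $LCOV_OPTS"
    fi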
00:06:38.805 [2024-09-28 08:44:16.561586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59610 ] 00:06:38.805 [2024-09-28 08:44:16.728805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.064 [2024-09-28 08:44:16.889982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.323 [2024-09-28 08:44:17.070250] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.582 08:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.582 08:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:39.582 08:44:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59610 00:06:39.582 08:44:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.582 08:44:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59610 00:06:40.150 08:44:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59610 00:06:40.150 08:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59610 ']' 00:06:40.150 08:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59610 00:06:40.150 08:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:40.150 08:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.150 08:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59610 00:06:40.150 08:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.150 08:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.150 killing process with pid 59610 00:06:40.150 08:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59610' 00:06:40.150 08:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59610 00:06:40.150 08:44:17 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59610 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59610 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59610 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59610 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59610 ']' 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.199 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.199 ERROR: process (pid: 59610) is no longer running 00:06:42.199 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59610) - No such process 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:42.199 00:06:42.199 real 0m3.601s 00:06:42.199 user 0m3.719s 00:06:42.199 sys 0m0.645s 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.199 08:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.199 ************************************ 00:06:42.199 END TEST default_locks 00:06:42.199 ************************************ 00:06:42.199 08:44:20 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:42.199 08:44:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.199 08:44:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.199 08:44:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.199 ************************************ 00:06:42.199 START TEST default_locks_via_rpc 00:06:42.199 ************************************ 00:06:42.199 08:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:42.199 08:44:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59685 00:06:42.199 08:44:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59685 00:06:42.199 08:44:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.199 08:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59685 ']' 00:06:42.199 08:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.199 08:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:06:42.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.199 08:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.199 08:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.199 08:44:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.458 [2024-09-28 08:44:20.184986] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:42.458 [2024-09-28 08:44:20.185186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59685 ] 00:06:42.459 [2024-09-28 08:44:20.342889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.718 [2024-09-28 08:44:20.505975] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.718 [2024-09-28 08:44:20.708180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59685 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.286 08:44:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59685 00:06:43.854 08:44:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59685 00:06:43.854 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59685 ']' 00:06:43.854 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59685 00:06:43.854 08:44:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:43.854 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.854 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59685 00:06:43.854 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.854 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.854 killing process with pid 59685 00:06:43.854 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59685' 00:06:43.854 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59685 00:06:43.854 08:44:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59685 00:06:45.759 00:06:45.759 real 0m3.570s 00:06:45.759 user 0m3.800s 00:06:45.759 sys 0m0.521s 00:06:45.759 08:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.759 08:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.759 ************************************ 00:06:45.759 END TEST default_locks_via_rpc 00:06:45.759 ************************************ 00:06:45.759 08:44:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:45.759 08:44:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.759 08:44:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.759 08:44:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.759 ************************************ 00:06:45.759 START TEST non_locking_app_on_locked_coremask 00:06:45.759 ************************************ 00:06:45.759 08:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:45.759 08:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59748 00:06:45.759 08:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59748 /var/tmp/spdk.sock 00:06:45.759 08:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.759 08:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59748 ']' 00:06:45.759 08:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.759 08:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.759 08:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
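[editorial sketch] default_locks_via_rpc, traced above, exercises the core-lock file from inside a single target instead of racing two processes: disable the cpumask locks over RPC, confirm no spdk_cpu_lock files are held, re-enable them, and confirm lslocks sees the lock again before tearing down. Stripped of the xtrace noise (pid 59685 in this run):
    rpc_cmd framework_disable_cpumask_locks          # cpu_locks.sh@65
    # no_locks: expect zero spdk_cpu_lock* files to be held at this point
    rpc_cmd framework_enable_cpumask_locks           # cpu_locks.sh@69
    lslocks -p 59685 | grep -q spdk_cpu_lock         # locks_exist: the lock is back
    killprocess 59685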
00:06:45.759 08:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.759 08:44:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.018 [2024-09-28 08:44:23.837729] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:46.018 [2024-09-28 08:44:23.837941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59748 ] 00:06:46.018 [2024-09-28 08:44:24.004941] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.277 [2024-09-28 08:44:24.163466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.536 [2024-09-28 08:44:24.360506] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.103 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.103 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:47.103 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59765 00:06:47.103 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:47.103 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59765 /var/tmp/spdk2.sock 00:06:47.103 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59765 ']' 00:06:47.103 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.103 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.103 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.103 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.103 08:44:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.103 [2024-09-28 08:44:24.929085] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:47.103 [2024-09-28 08:44:24.929271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59765 ] 00:06:47.103 [2024-09-28 08:44:25.089992] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
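[editorial sketch] non_locking_app_on_locked_coremask, whose startup is traced just above, shows the supported way to co-locate two targets on the same core: the first instance claims the core-0 lock, and the second opts out of locking and uses a separate RPC socket, so it starts cleanly and logs "CPU core locks deactivated." In outline (binary path abbreviated; pids 59748/59765 in this run):
    spdk_tgt -m 0x1 &                                            # cpu_locks.sh@79: claims core 0
    waitforlisten 59748 /var/tmp/spdk.sock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # cpu_locks.sh@83
    waitforlisten 59765 /var/tmp/spdk2.sock                      # succeeds despite the shared mask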
00:06:47.103 [2024-09-28 08:44:25.090067] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.670 [2024-09-28 08:44:25.426603] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.929 [2024-09-28 08:44:25.912382] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.830 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.830 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:49.830 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59748 00:06:49.830 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59748 00:06:49.830 08:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.763 08:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59748 00:06:50.763 08:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59748 ']' 00:06:50.763 08:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59748 00:06:50.763 08:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:50.763 08:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.763 08:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59748 00:06:50.763 killing process with pid 59748 00:06:50.763 08:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.763 08:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.763 08:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59748' 00:06:50.763 08:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59748 00:06:50.763 08:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59748 00:06:54.948 08:44:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59765 00:06:54.948 08:44:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59765 ']' 00:06:54.948 08:44:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59765 00:06:54.948 08:44:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:54.948 08:44:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.948 08:44:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59765 00:06:54.948 killing process with pid 59765 00:06:54.948 08:44:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.948 08:44:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.948 08:44:32 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59765' 00:06:54.948 08:44:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59765 00:06:54.948 08:44:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59765 00:06:56.855 ************************************ 00:06:56.855 END TEST non_locking_app_on_locked_coremask 00:06:56.855 ************************************ 00:06:56.855 00:06:56.855 real 0m10.861s 00:06:56.855 user 0m11.565s 00:06:56.855 sys 0m1.270s 00:06:56.855 08:44:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.855 08:44:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.855 08:44:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:56.855 08:44:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.855 08:44:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.855 08:44:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.855 ************************************ 00:06:56.855 START TEST locking_app_on_unlocked_coremask 00:06:56.855 ************************************ 00:06:56.855 08:44:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:56.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.855 08:44:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59914 00:06:56.855 08:44:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59914 /var/tmp/spdk.sock 00:06:56.855 08:44:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59914 ']' 00:06:56.855 08:44:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.855 08:44:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.855 08:44:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:56.855 08:44:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.855 08:44:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.855 08:44:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.855 [2024-09-28 08:44:34.752552] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:56.855 [2024-09-28 08:44:34.752733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59914 ] 00:06:57.114 [2024-09-28 08:44:34.923676] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:57.114 [2024-09-28 08:44:34.923734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.372 [2024-09-28 08:44:35.128601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.631 [2024-09-28 08:44:35.373379] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.198 08:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.198 08:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:58.198 08:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59931 00:06:58.198 08:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59931 /var/tmp/spdk2.sock 00:06:58.198 08:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:58.198 08:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59931 ']' 00:06:58.198 08:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.198 08:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.198 08:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.198 08:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.198 08:44:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.198 [2024-09-28 08:44:36.105878] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
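[editorial sketch] locking_app_on_unlocked_coremask is the mirror image of the previous test: here the first target is the one started with --disable-cpumask-locks, leaving core 0 unlocked, so the plain second target on /var/tmp/spdk2.sock is free to claim the lock, as the locks_exist check a little further down confirms. Roughly (pids 59914/59931 in this run):
    spdk_tgt -m 0x1 --disable-cpumask-locks &        # cpu_locks.sh@97: leaves core 0 unlocked
    waitforlisten 59914 /var/tmp/spdk.sock
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &         # cpu_locks.sh@101: takes the lock itself
    waitforlisten 59931 /var/tmp/spdk2.sock
    lslocks -p 59931 | grep -q spdk_cpu_lock         # cpu_locks.sh@105: second instance holds it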
00:06:58.198 [2024-09-28 08:44:36.106309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59931 ] 00:06:58.457 [2024-09-28 08:44:36.287418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.716 [2024-09-28 08:44:36.671163] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.284 [2024-09-28 08:44:37.153604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:01.217 08:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.217 08:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:01.217 08:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59931 00:07:01.217 08:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59931 00:07:01.217 08:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.154 08:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59914 00:07:02.154 08:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59914 ']' 00:07:02.154 08:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59914 00:07:02.154 08:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:02.154 08:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.154 08:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59914 00:07:02.154 killing process with pid 59914 00:07:02.154 08:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.154 08:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.154 08:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59914' 00:07:02.154 08:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59914 00:07:02.154 08:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59914 00:07:07.430 08:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59931 00:07:07.430 08:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59931 ']' 00:07:07.430 08:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59931 00:07:07.430 08:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:07.430 08:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.430 08:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59931 00:07:07.430 killing process with pid 59931 00:07:07.430 08:44:44 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.430 08:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.430 08:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59931' 00:07:07.430 08:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59931 00:07:07.430 08:44:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59931 00:07:08.808 00:07:08.808 real 0m12.150s 00:07:08.808 user 0m12.895s 00:07:08.808 sys 0m1.400s 00:07:08.808 08:44:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.808 08:44:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.808 ************************************ 00:07:08.808 END TEST locking_app_on_unlocked_coremask 00:07:08.808 ************************************ 00:07:09.067 08:44:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:09.067 08:44:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.067 08:44:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.067 08:44:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.067 ************************************ 00:07:09.067 START TEST locking_app_on_locked_coremask 00:07:09.067 ************************************ 00:07:09.067 08:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:09.067 08:44:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60088 00:07:09.067 08:44:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:09.067 08:44:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60088 /var/tmp/spdk.sock 00:07:09.067 08:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60088 ']' 00:07:09.067 08:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.067 08:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.067 08:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.067 08:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.067 08:44:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.067 [2024-09-28 08:44:46.953340] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
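[editorial sketch] The killprocess helper that closes out each of these tests (seen twice just above for pids 59914 and 59931) follows the same traced path every time; only the branch actually taken in these runs is restated here, the full helper in autotest_common.sh has more cases:
    kill -0 "$pid"                                   # @954: fail fast if the process is already gone
    process_name=$(ps --no-headers -o comm= "$pid")  # @956: reactor_0 in these runs
    echo "killing process with pid $pid"             # @968 (the sudo branch at @960 is not taken)
    kill "$pid"                                      # @969
    wait "$pid"                                      # @974: reap it and propagate the exit status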
00:07:09.067 [2024-09-28 08:44:46.953521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60088 ] 00:07:09.326 [2024-09-28 08:44:47.120360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.327 [2024-09-28 08:44:47.287192] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.585 [2024-09-28 08:44:47.487683] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60110 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60110 /var/tmp/spdk2.sock 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60110 /var/tmp/spdk2.sock 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60110 /var/tmp/spdk2.sock 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 60110 ']' 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.153 08:44:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.153 [2024-09-28 08:44:48.052461] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
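[editorial sketch] locking_app_on_locked_coremask runs the negative case: the second target reuses mask 0x1 without --disable-cpumask-locks, so its startup is wrapped in the NOT helper and the test passes precisely because waitforlisten fails (the "Cannot create lock on core 0, probably process 60088 has claimed it" error appears just below). The NOT pattern, as traced:
    NOT waitforlisten 60110 /var/tmp/spdk2.sock
    # inside NOT (autotest_common.sh@650..677): run the command, capture es,
    # handle es>128 (signal deaths) separately, then require (( !es == 0 )),
    # i.e. NOT returns success only when the wrapped command returned nonzero.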
00:07:10.153 [2024-09-28 08:44:48.052904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60110 ] 00:07:10.411 [2024-09-28 08:44:48.214714] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60088 has claimed it. 00:07:10.411 [2024-09-28 08:44:48.214800] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:10.978 ERROR: process (pid: 60110) is no longer running 00:07:10.978 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60110) - No such process 00:07:10.978 08:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.978 08:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:10.978 08:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:10.978 08:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:10.978 08:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:10.978 08:44:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:10.978 08:44:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60088 00:07:10.978 08:44:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60088 00:07:10.978 08:44:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.237 08:44:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60088 00:07:11.237 08:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 60088 ']' 00:07:11.237 08:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 60088 00:07:11.496 08:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:11.496 08:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.496 08:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60088 00:07:11.496 killing process with pid 60088 00:07:11.496 08:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.496 08:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.496 08:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60088' 00:07:11.496 08:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 60088 00:07:11.496 08:44:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 60088 00:07:13.401 ************************************ 00:07:13.401 END TEST locking_app_on_locked_coremask 00:07:13.401 ************************************ 00:07:13.401 00:07:13.401 real 0m4.307s 00:07:13.401 user 0m4.747s 00:07:13.401 sys 0m0.764s 00:07:13.401 08:44:51 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.401 08:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.401 08:44:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:13.401 08:44:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.401 08:44:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.401 08:44:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.401 ************************************ 00:07:13.401 START TEST locking_overlapped_coremask 00:07:13.401 ************************************ 00:07:13.401 08:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:13.401 08:44:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60174 00:07:13.401 08:44:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60174 /var/tmp/spdk.sock 00:07:13.401 08:44:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:13.401 08:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60174 ']' 00:07:13.401 08:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.401 08:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.401 08:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.401 08:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.401 08:44:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.401 [2024-09-28 08:44:51.315316] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:13.401 [2024-09-28 08:44:51.315499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60174 ] 00:07:13.660 [2024-09-28 08:44:51.485809] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.660 [2024-09-28 08:44:51.644319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.660 [2024-09-28 08:44:51.644428] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.660 [2024-09-28 08:44:51.644446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.920 [2024-09-28 08:44:51.847286] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60192 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60192 /var/tmp/spdk2.sock 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60192 /var/tmp/spdk2.sock 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60192 /var/tmp/spdk2.sock 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 60192 ']' 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.488 08:44:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.488 [2024-09-28 08:44:52.460639] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
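The second target started above uses -m 0x1c while the first (pid 60174) runs with -m 0x7; the two masks overlap only on core 2, which is exactly where the claim failure below lands. The overlap can be checked with plain shell arithmetic (illustrative only, not part of the test output):
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2, CPU core 2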
00:07:14.488 [2024-09-28 08:44:52.461148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60192 ] 00:07:14.747 [2024-09-28 08:44:52.638237] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60174 has claimed it. 00:07:14.747 [2024-09-28 08:44:52.638340] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:15.315 ERROR: process (pid: 60192) is no longer running 00:07:15.315 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (60192) - No such process 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60174 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 60174 ']' 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 60174 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60174 00:07:15.315 killing process with pid 60174 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60174' 00:07:15.315 08:44:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 60174 00:07:15.315 08:44:53 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 60174 00:07:17.852 00:07:17.852 real 0m4.051s 00:07:17.852 user 0m10.827s 00:07:17.852 sys 0m0.553s 00:07:17.852 ************************************ 00:07:17.852 END TEST locking_overlapped_coremask 00:07:17.852 ************************************ 00:07:17.852 08:44:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.852 08:44:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.852 08:44:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:17.852 08:44:55 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.852 08:44:55 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.852 08:44:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.852 ************************************ 00:07:17.852 START TEST locking_overlapped_coremask_via_rpc 00:07:17.852 ************************************ 00:07:17.852 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:17.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.852 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60256 00:07:17.852 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60256 /var/tmp/spdk.sock 00:07:17.852 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:17.852 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60256 ']' 00:07:17.852 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.852 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.852 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.852 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.852 08:44:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.852 [2024-09-28 08:44:55.405897] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:17.852 [2024-09-28 08:44:55.406075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60256 ] 00:07:17.852 [2024-09-28 08:44:55.574523] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:17.852 [2024-09-28 08:44:55.574577] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.852 [2024-09-28 08:44:55.765760] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.852 [2024-09-28 08:44:55.765904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.852 [2024-09-28 08:44:55.765915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.118 [2024-09-28 08:44:55.983765] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.689 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.689 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:18.689 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60274 00:07:18.689 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:18.689 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60274 /var/tmp/spdk2.sock 00:07:18.689 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60274 ']' 00:07:18.689 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:18.689 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.689 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:18.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:18.689 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.689 08:44:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.689 [2024-09-28 08:44:56.592865] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:18.689 [2024-09-28 08:44:56.593361] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60274 ] 00:07:18.948 [2024-09-28 08:44:56.767893] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:18.948 [2024-09-28 08:44:56.767962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.207 [2024-09-28 08:44:57.104568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.207 [2024-09-28 08:44:57.107994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.207 [2024-09-28 08:44:57.108011] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:19.775 [2024-09-28 08:44:57.525687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.714 [2024-09-28 08:44:58.539073] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60256 has claimed it. 00:07:20.714 request: 00:07:20.714 { 00:07:20.714 "method": "framework_enable_cpumask_locks", 00:07:20.714 "req_id": 1 00:07:20.714 } 00:07:20.714 Got JSON-RPC error response 00:07:20.714 response: 00:07:20.714 { 00:07:20.714 "code": -32603, 00:07:20.714 "message": "Failed to claim CPU core: 2" 00:07:20.714 } 00:07:20.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
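Both targets in this test were started with --disable-cpumask-locks (hence the 'CPU core locks deactivated' notices above), so they could come up on overlapping masks; the locks are then taken at runtime through the framework_enable_cpumask_locks RPC. That call succeeds on the first target and, as shown above, fails on the second with -32603 because core 2 is already claimed by pid 60256. Reproduced by hand it would look roughly like this (a sketch; the rpc.py path and socket names are the ones used in this run):
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # second call returns JSON-RPC error -32603 'Failed to claim CPU core: 2'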
00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60256 /var/tmp/spdk.sock 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60256 ']' 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.714 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.974 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.974 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:20.974 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60274 /var/tmp/spdk2.sock 00:07:20.974 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 60274 ']' 00:07:20.974 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.974 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.974 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
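The check_remaining_locks helper, exercised just below for the first target, reduces to a glob-versus-brace-expansion comparison; the array assignments are the ones visible in the trace (cpu_locks.sh lines 36 to 38), rewritten here as a self-contained sketch:
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # one lock file per core in -m 0x7
  [[ "${locks[*]}" == "${locks_expected[*]}" ]]        # passes only if no stray lock files remain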
00:07:20.974 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.974 08:44:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.234 ************************************ 00:07:21.234 END TEST locking_overlapped_coremask_via_rpc 00:07:21.234 ************************************ 00:07:21.234 08:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.234 08:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:21.234 08:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:21.234 08:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:21.234 08:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:21.234 08:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:21.234 00:07:21.234 real 0m3.839s 00:07:21.234 user 0m1.542s 00:07:21.234 sys 0m0.156s 00:07:21.234 08:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.234 08:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.234 08:44:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:21.234 08:44:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60256 ]] 00:07:21.234 08:44:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60256 00:07:21.234 08:44:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60256 ']' 00:07:21.234 08:44:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60256 00:07:21.234 08:44:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:21.234 08:44:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.234 08:44:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60256 00:07:21.234 08:44:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.234 killing process with pid 60256 00:07:21.234 08:44:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.234 08:44:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60256' 00:07:21.234 08:44:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60256 00:07:21.234 08:44:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60256 00:07:23.832 08:45:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60274 ]] 00:07:23.832 08:45:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60274 00:07:23.832 08:45:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60274 ']' 00:07:23.832 08:45:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60274 00:07:23.832 08:45:01 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:23.832 08:45:01 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.832 
08:45:01 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60274 00:07:23.832 killing process with pid 60274 00:07:23.832 08:45:01 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:23.832 08:45:01 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:23.832 08:45:01 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60274' 00:07:23.832 08:45:01 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 60274 00:07:23.832 08:45:01 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 60274 00:07:25.738 08:45:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:25.738 Process with pid 60256 is not found 00:07:25.738 08:45:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:25.738 08:45:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60256 ]] 00:07:25.738 08:45:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60256 00:07:25.738 08:45:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60256 ']' 00:07:25.738 08:45:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60256 00:07:25.738 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60256) - No such process 00:07:25.738 08:45:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60256 is not found' 00:07:25.738 08:45:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60274 ]] 00:07:25.738 08:45:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60274 00:07:25.738 08:45:03 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 60274 ']' 00:07:25.738 08:45:03 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 60274 00:07:25.738 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (60274) - No such process 00:07:25.738 Process with pid 60274 is not found 00:07:25.738 08:45:03 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 60274 is not found' 00:07:25.738 08:45:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:25.738 00:07:25.738 real 0m47.177s 00:07:25.738 user 1m19.144s 00:07:25.738 sys 0m6.339s 00:07:25.738 ************************************ 00:07:25.738 END TEST cpu_locks 00:07:25.738 ************************************ 00:07:25.738 08:45:03 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.738 08:45:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.738 ************************************ 00:07:25.738 END TEST event 00:07:25.738 ************************************ 00:07:25.738 00:07:25.739 real 1m19.771s 00:07:25.739 user 2m24.527s 00:07:25.739 sys 0m10.034s 00:07:25.739 08:45:03 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.739 08:45:03 event -- common/autotest_common.sh@10 -- # set +x 00:07:25.739 08:45:03 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:25.739 08:45:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:25.739 08:45:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.739 08:45:03 -- common/autotest_common.sh@10 -- # set +x 00:07:25.739 ************************************ 00:07:25.739 START TEST thread 00:07:25.739 ************************************ 00:07:25.739 08:45:03 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:25.739 * Looking for test storage... 
00:07:25.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:25.739 08:45:03 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:25.739 08:45:03 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:25.739 08:45:03 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:25.739 08:45:03 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:25.739 08:45:03 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.739 08:45:03 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.739 08:45:03 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.739 08:45:03 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.739 08:45:03 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.739 08:45:03 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.739 08:45:03 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.739 08:45:03 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.739 08:45:03 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.739 08:45:03 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.739 08:45:03 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.739 08:45:03 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:25.739 08:45:03 thread -- scripts/common.sh@345 -- # : 1 00:07:25.739 08:45:03 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.739 08:45:03 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.739 08:45:03 thread -- scripts/common.sh@365 -- # decimal 1 00:07:25.739 08:45:03 thread -- scripts/common.sh@353 -- # local d=1 00:07:25.739 08:45:03 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.739 08:45:03 thread -- scripts/common.sh@355 -- # echo 1 00:07:25.739 08:45:03 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.739 08:45:03 thread -- scripts/common.sh@366 -- # decimal 2 00:07:25.739 08:45:03 thread -- scripts/common.sh@353 -- # local d=2 00:07:25.739 08:45:03 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.739 08:45:03 thread -- scripts/common.sh@355 -- # echo 2 00:07:25.739 08:45:03 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.739 08:45:03 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.739 08:45:03 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.739 08:45:03 thread -- scripts/common.sh@368 -- # return 0 00:07:25.739 08:45:03 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.739 08:45:03 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:25.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.739 --rc genhtml_branch_coverage=1 00:07:25.739 --rc genhtml_function_coverage=1 00:07:25.739 --rc genhtml_legend=1 00:07:25.739 --rc geninfo_all_blocks=1 00:07:25.739 --rc geninfo_unexecuted_blocks=1 00:07:25.739 00:07:25.739 ' 00:07:25.739 08:45:03 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:25.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.739 --rc genhtml_branch_coverage=1 00:07:25.739 --rc genhtml_function_coverage=1 00:07:25.739 --rc genhtml_legend=1 00:07:25.739 --rc geninfo_all_blocks=1 00:07:25.739 --rc geninfo_unexecuted_blocks=1 00:07:25.739 00:07:25.739 ' 00:07:25.739 08:45:03 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:25.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:25.739 --rc genhtml_branch_coverage=1 00:07:25.739 --rc genhtml_function_coverage=1 00:07:25.739 --rc genhtml_legend=1 00:07:25.739 --rc geninfo_all_blocks=1 00:07:25.739 --rc geninfo_unexecuted_blocks=1 00:07:25.739 00:07:25.739 ' 00:07:25.739 08:45:03 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:25.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.739 --rc genhtml_branch_coverage=1 00:07:25.739 --rc genhtml_function_coverage=1 00:07:25.739 --rc genhtml_legend=1 00:07:25.739 --rc geninfo_all_blocks=1 00:07:25.739 --rc geninfo_unexecuted_blocks=1 00:07:25.739 00:07:25.739 ' 00:07:25.739 08:45:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:25.739 08:45:03 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:25.739 08:45:03 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.739 08:45:03 thread -- common/autotest_common.sh@10 -- # set +x 00:07:25.739 ************************************ 00:07:25.739 START TEST thread_poller_perf 00:07:25.739 ************************************ 00:07:25.739 08:45:03 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:25.998 [2024-09-28 08:45:03.759359] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:25.998 [2024-09-28 08:45:03.759537] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60456 ] 00:07:25.998 [2024-09-28 08:45:03.931694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.257 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:26.258 [2024-09-28 08:45:04.086156] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.637 ====================================== 00:07:27.637 busy:2208471194 (cyc) 00:07:27.637 total_run_count: 362000 00:07:27.637 tsc_hz: 2200000000 (cyc) 00:07:27.637 ====================================== 00:07:27.637 poller_cost: 6100 (cyc), 2772 (nsec) 00:07:27.637 00:07:27.637 real 0m1.728s 00:07:27.637 user 0m1.523s 00:07:27.637 sys 0m0.097s 00:07:27.637 08:45:05 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.637 08:45:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:27.637 ************************************ 00:07:27.637 END TEST thread_poller_perf 00:07:27.637 ************************************ 00:07:27.637 08:45:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:27.637 08:45:05 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:27.637 08:45:05 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.637 08:45:05 thread -- common/autotest_common.sh@10 -- # set +x 00:07:27.637 ************************************ 00:07:27.637 START TEST thread_poller_perf 00:07:27.637 ************************************ 00:07:27.637 08:45:05 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:27.637 [2024-09-28 08:45:05.522147] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:27.637 [2024-09-28 08:45:05.522289] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60493 ] 00:07:27.896 [2024-09-28 08:45:05.672676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.896 Running 1000 pollers for 1 seconds with 0 microseconds period. 
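The poller_cost values printed by poller_perf follow directly from the other counters in the summary. Worked through here for the 1-microsecond-period run above (the 0-microsecond run that follows can be checked the same way); the shell arithmetic is illustrative, not part of the test output:
  echo $(( 2208471194 / 362000 ))               # busy cycles / total_run_count = 6100 cyc per poll
  echo $(( 6100 * 1000000000 / 2200000000 ))    # 6100 cyc at tsc_hz 2.2 GHz = 2772 nsec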
00:07:27.896 [2024-09-28 08:45:05.842185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.273 ====================================== 00:07:29.273 busy:2203302206 (cyc) 00:07:29.273 total_run_count: 4434000 00:07:29.273 tsc_hz: 2200000000 (cyc) 00:07:29.273 ====================================== 00:07:29.273 poller_cost: 496 (cyc), 225 (nsec) 00:07:29.273 00:07:29.273 real 0m1.676s 00:07:29.273 user 0m1.488s 00:07:29.273 sys 0m0.079s 00:07:29.273 08:45:07 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.273 ************************************ 00:07:29.273 END TEST thread_poller_perf 00:07:29.273 ************************************ 00:07:29.273 08:45:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:29.273 08:45:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:29.273 ************************************ 00:07:29.273 END TEST thread 00:07:29.273 ************************************ 00:07:29.273 00:07:29.273 real 0m3.697s 00:07:29.273 user 0m3.161s 00:07:29.273 sys 0m0.314s 00:07:29.273 08:45:07 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.273 08:45:07 thread -- common/autotest_common.sh@10 -- # set +x 00:07:29.273 08:45:07 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:29.273 08:45:07 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:29.273 08:45:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.273 08:45:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.273 08:45:07 -- common/autotest_common.sh@10 -- # set +x 00:07:29.273 ************************************ 00:07:29.273 START TEST app_cmdline 00:07:29.273 ************************************ 00:07:29.273 08:45:07 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:29.532 * Looking for test storage... 00:07:29.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:29.532 08:45:07 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:29.532 08:45:07 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:29.532 08:45:07 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:29.532 08:45:07 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:29.532 08:45:07 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:29.533 08:45:07 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:29.533 08:45:07 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.533 08:45:07 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:29.533 08:45:07 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:29.533 08:45:07 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:29.533 08:45:07 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:29.533 08:45:07 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:29.533 08:45:07 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.533 08:45:07 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:29.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.533 --rc genhtml_branch_coverage=1 00:07:29.533 --rc genhtml_function_coverage=1 00:07:29.533 --rc genhtml_legend=1 00:07:29.533 --rc geninfo_all_blocks=1 00:07:29.533 --rc geninfo_unexecuted_blocks=1 00:07:29.533 00:07:29.533 ' 00:07:29.533 08:45:07 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:29.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.533 --rc genhtml_branch_coverage=1 00:07:29.533 --rc genhtml_function_coverage=1 00:07:29.533 --rc genhtml_legend=1 00:07:29.533 --rc geninfo_all_blocks=1 00:07:29.533 --rc geninfo_unexecuted_blocks=1 00:07:29.533 00:07:29.533 ' 00:07:29.533 08:45:07 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:29.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.533 --rc genhtml_branch_coverage=1 00:07:29.533 --rc genhtml_function_coverage=1 00:07:29.533 --rc genhtml_legend=1 00:07:29.533 --rc geninfo_all_blocks=1 00:07:29.533 --rc geninfo_unexecuted_blocks=1 00:07:29.533 00:07:29.533 ' 00:07:29.533 08:45:07 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:29.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.533 --rc genhtml_branch_coverage=1 00:07:29.533 --rc genhtml_function_coverage=1 00:07:29.533 --rc genhtml_legend=1 00:07:29.533 --rc geninfo_all_blocks=1 00:07:29.533 --rc geninfo_unexecuted_blocks=1 00:07:29.533 00:07:29.533 ' 00:07:29.533 08:45:07 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:29.533 08:45:07 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60582 00:07:29.533 08:45:07 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60582 00:07:29.533 08:45:07 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:29.533 08:45:07 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 60582 ']' 00:07:29.533 08:45:07 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.533 08:45:07 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.533 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:29.533 08:45:07 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.533 08:45:07 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.533 08:45:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.792 [2024-09-28 08:45:07.585210] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:29.792 [2024-09-28 08:45:07.585393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60582 ] 00:07:29.792 [2024-09-28 08:45:07.753785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.052 [2024-09-28 08:45:07.921601] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.311 [2024-09-28 08:45:08.101895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.569 08:45:08 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.569 08:45:08 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:30.569 08:45:08 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:30.828 { 00:07:30.828 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:07:30.828 "fields": { 00:07:30.828 "major": 25, 00:07:30.828 "minor": 1, 00:07:30.828 "patch": 0, 00:07:30.828 "suffix": "-pre", 00:07:30.828 "commit": "09cc66129" 00:07:30.828 } 00:07:30.828 } 00:07:30.828 08:45:08 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:30.828 08:45:08 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:30.828 08:45:08 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:30.828 08:45:08 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:30.828 08:45:08 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:30.828 08:45:08 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:30.828 08:45:08 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.828 08:45:08 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:30.828 08:45:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:30.828 08:45:08 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.828 08:45:08 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:30.828 08:45:08 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:30.828 08:45:08 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.828 08:45:08 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:30.828 08:45:08 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.828 08:45:08 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.828 08:45:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:30.828 08:45:08 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:31.087 08:45:08 app_cmdline -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.087 08:45:08 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:31.087 08:45:08 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.087 08:45:08 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:31.087 08:45:08 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:31.087 08:45:08 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:31.347 request: 00:07:31.347 { 00:07:31.347 "method": "env_dpdk_get_mem_stats", 00:07:31.347 "req_id": 1 00:07:31.347 } 00:07:31.347 Got JSON-RPC error response 00:07:31.347 response: 00:07:31.347 { 00:07:31.347 "code": -32601, 00:07:31.347 "message": "Method not found" 00:07:31.347 } 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:31.347 08:45:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60582 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 60582 ']' 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 60582 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60582 00:07:31.347 killing process with pid 60582 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60582' 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@969 -- # kill 60582 00:07:31.347 08:45:09 app_cmdline -- common/autotest_common.sh@974 -- # wait 60582 00:07:33.254 00:07:33.254 real 0m3.699s 00:07:33.254 user 0m4.177s 00:07:33.254 sys 0m0.504s 00:07:33.254 08:45:10 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.254 ************************************ 00:07:33.254 END TEST app_cmdline 00:07:33.254 ************************************ 00:07:33.254 08:45:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:33.254 08:45:10 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:33.254 08:45:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.254 08:45:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.254 08:45:10 -- common/autotest_common.sh@10 -- # set +x 00:07:33.254 ************************************ 00:07:33.254 START TEST version 00:07:33.254 ************************************ 00:07:33.254 08:45:11 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:33.254 * Looking for test storage... 
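The env_dpdk_get_mem_stats failure above (-32601 Method not found) is the expected result: the target for this test was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over /var/tmp/spdk.sock. A sketch of the three calls the test boils down to (paths and flags copied from this trace, ordering illustrative):
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed, returns the version object
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods         # allowed, lists exactly these two methods
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # rejected with -32601 'Method not found'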
00:07:33.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:33.254 08:45:11 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:33.254 08:45:11 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:33.254 08:45:11 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:33.254 08:45:11 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:33.254 08:45:11 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.254 08:45:11 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.254 08:45:11 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.254 08:45:11 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.254 08:45:11 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.254 08:45:11 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.254 08:45:11 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.254 08:45:11 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.254 08:45:11 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.254 08:45:11 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.254 08:45:11 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.254 08:45:11 version -- scripts/common.sh@344 -- # case "$op" in 00:07:33.254 08:45:11 version -- scripts/common.sh@345 -- # : 1 00:07:33.254 08:45:11 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.254 08:45:11 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:33.254 08:45:11 version -- scripts/common.sh@365 -- # decimal 1 00:07:33.254 08:45:11 version -- scripts/common.sh@353 -- # local d=1 00:07:33.254 08:45:11 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.254 08:45:11 version -- scripts/common.sh@355 -- # echo 1 00:07:33.254 08:45:11 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.254 08:45:11 version -- scripts/common.sh@366 -- # decimal 2 00:07:33.254 08:45:11 version -- scripts/common.sh@353 -- # local d=2 00:07:33.254 08:45:11 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.254 08:45:11 version -- scripts/common.sh@355 -- # echo 2 00:07:33.254 08:45:11 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.254 08:45:11 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.254 08:45:11 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.254 08:45:11 version -- scripts/common.sh@368 -- # return 0 00:07:33.254 08:45:11 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.254 08:45:11 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:33.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.254 --rc genhtml_branch_coverage=1 00:07:33.254 --rc genhtml_function_coverage=1 00:07:33.254 --rc genhtml_legend=1 00:07:33.254 --rc geninfo_all_blocks=1 00:07:33.254 --rc geninfo_unexecuted_blocks=1 00:07:33.254 00:07:33.254 ' 00:07:33.254 08:45:11 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:33.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.254 --rc genhtml_branch_coverage=1 00:07:33.254 --rc genhtml_function_coverage=1 00:07:33.254 --rc genhtml_legend=1 00:07:33.254 --rc geninfo_all_blocks=1 00:07:33.254 --rc geninfo_unexecuted_blocks=1 00:07:33.254 00:07:33.254 ' 00:07:33.254 08:45:11 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:33.254 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:33.254 --rc genhtml_branch_coverage=1 00:07:33.254 --rc genhtml_function_coverage=1 00:07:33.254 --rc genhtml_legend=1 00:07:33.254 --rc geninfo_all_blocks=1 00:07:33.254 --rc geninfo_unexecuted_blocks=1 00:07:33.254 00:07:33.254 ' 00:07:33.254 08:45:11 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:33.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.254 --rc genhtml_branch_coverage=1 00:07:33.254 --rc genhtml_function_coverage=1 00:07:33.254 --rc genhtml_legend=1 00:07:33.254 --rc geninfo_all_blocks=1 00:07:33.254 --rc geninfo_unexecuted_blocks=1 00:07:33.254 00:07:33.254 ' 00:07:33.254 08:45:11 version -- app/version.sh@17 -- # get_header_version major 00:07:33.254 08:45:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.254 08:45:11 version -- app/version.sh@14 -- # cut -f2 00:07:33.254 08:45:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:33.254 08:45:11 version -- app/version.sh@17 -- # major=25 00:07:33.254 08:45:11 version -- app/version.sh@18 -- # get_header_version minor 00:07:33.254 08:45:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.254 08:45:11 version -- app/version.sh@14 -- # cut -f2 00:07:33.254 08:45:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:33.254 08:45:11 version -- app/version.sh@18 -- # minor=1 00:07:33.254 08:45:11 version -- app/version.sh@19 -- # get_header_version patch 00:07:33.254 08:45:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.254 08:45:11 version -- app/version.sh@14 -- # cut -f2 00:07:33.254 08:45:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:33.254 08:45:11 version -- app/version.sh@19 -- # patch=0 00:07:33.254 08:45:11 version -- app/version.sh@20 -- # get_header_version suffix 00:07:33.254 08:45:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:33.254 08:45:11 version -- app/version.sh@14 -- # cut -f2 00:07:33.254 08:45:11 version -- app/version.sh@14 -- # tr -d '"' 00:07:33.254 08:45:11 version -- app/version.sh@20 -- # suffix=-pre 00:07:33.254 08:45:11 version -- app/version.sh@22 -- # version=25.1 00:07:33.254 08:45:11 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:33.254 08:45:11 version -- app/version.sh@28 -- # version=25.1rc0 00:07:33.255 08:45:11 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:33.255 08:45:11 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:33.515 08:45:11 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:33.515 08:45:11 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:33.515 00:07:33.515 real 0m0.245s 00:07:33.515 user 0m0.153s 00:07:33.515 sys 0m0.129s 00:07:33.515 ************************************ 00:07:33.515 END TEST version 00:07:33.515 ************************************ 00:07:33.515 08:45:11 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.515 08:45:11 version -- common/autotest_common.sh@10 -- # set +x 00:07:33.515 08:45:11 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:33.515 08:45:11 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:33.515 08:45:11 -- spdk/autotest.sh@194 -- # uname -s 00:07:33.515 08:45:11 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:33.515 08:45:11 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:33.515 08:45:11 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:33.515 08:45:11 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:33.515 08:45:11 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:33.515 08:45:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:33.515 08:45:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:33.515 08:45:11 -- common/autotest_common.sh@10 -- # set +x 00:07:33.515 ************************************ 00:07:33.515 START TEST spdk_dd 00:07:33.515 ************************************ 00:07:33.515 08:45:11 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:33.515 * Looking for test storage... 00:07:33.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:33.515 08:45:11 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:33.515 08:45:11 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:07:33.515 08:45:11 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:33.515 08:45:11 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:33.515 08:45:11 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.515 08:45:11 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:33.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.515 --rc genhtml_branch_coverage=1 00:07:33.515 --rc genhtml_function_coverage=1 00:07:33.515 --rc genhtml_legend=1 00:07:33.515 --rc geninfo_all_blocks=1 00:07:33.515 --rc geninfo_unexecuted_blocks=1 00:07:33.515 00:07:33.515 ' 00:07:33.515 08:45:11 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:33.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.515 --rc genhtml_branch_coverage=1 00:07:33.515 --rc genhtml_function_coverage=1 00:07:33.515 --rc genhtml_legend=1 00:07:33.515 --rc geninfo_all_blocks=1 00:07:33.515 --rc geninfo_unexecuted_blocks=1 00:07:33.515 00:07:33.515 ' 00:07:33.515 08:45:11 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:33.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.515 --rc genhtml_branch_coverage=1 00:07:33.515 --rc genhtml_function_coverage=1 00:07:33.515 --rc genhtml_legend=1 00:07:33.515 --rc geninfo_all_blocks=1 00:07:33.515 --rc geninfo_unexecuted_blocks=1 00:07:33.515 00:07:33.515 ' 00:07:33.515 08:45:11 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:33.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.515 --rc genhtml_branch_coverage=1 00:07:33.515 --rc genhtml_function_coverage=1 00:07:33.515 --rc genhtml_legend=1 00:07:33.515 --rc geninfo_all_blocks=1 00:07:33.515 --rc geninfo_unexecuted_blocks=1 00:07:33.515 00:07:33.515 ' 00:07:33.515 08:45:11 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.515 08:45:11 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.516 08:45:11 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.516 08:45:11 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.516 08:45:11 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.516 08:45:11 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:33.516 08:45:11 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.516 08:45:11 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:34.086 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:34.086 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:34.086 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:34.086 08:45:11 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:34.086 08:45:11 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:34.086 08:45:11 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:34.086 08:45:11 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:34.086 08:45:11 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:34.086 
08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 
08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:34.086 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read 
-r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:34.087 * spdk_dd linked to liburing 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:34.087 08:45:11 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:34.087 08:45:11 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:34.088 08:45:11 spdk_dd -- 
common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:34.088 08:45:11 spdk_dd -- 
common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:34.088 08:45:11 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:07:34.088 08:45:11 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:34.088 08:45:11 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:34.088 08:45:11 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:34.088 08:45:11 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:34.088 08:45:11 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:34.088 08:45:11 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:34.088 08:45:11 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:34.088 08:45:11 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.088 08:45:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:34.088 ************************************ 00:07:34.088 START TEST spdk_dd_basic_rw 00:07:34.088 ************************************ 00:07:34.088 08:45:11 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:34.088 * Looking for test storage... 
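[Editor's note, not part of the captured trace] The NEEDED-library scan that precedes this spdk_dd_basic_rw run (the dd/common.sh check_liburing loop over "objdump -p ... | grep NEEDED" traced above) can be reproduced stand-alone with the short sketch below. It is a minimal illustration only: the binary path is the one shown in the trace, the liburing.so.* pattern is what check_liburing tests each NEEDED entry against, and the wrapper function name here is invented for the example.

#!/usr/bin/env bash
# Hedged sketch of the liburing linkage probe seen in the trace above.
# Assumption: objdump -p prints dynamic entries as "  NEEDED  libfoo.so.1".
check_liburing_sketch() {
    local bin=${1:-/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd}   # path taken from the trace
    local liburing_in_use=0 _ lib
    while read -r _ lib _; do
        # Mirror the per-library test performed at dd/common.sh@143
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p "$bin" | grep NEEDED)
    echo "liburing_in_use=$liburing_in_use"
}
check_liburing_sketch "$@"

When spdk_dd is linked against liburing (as the "spdk_dd linked to liburing" message above reports), the sketch prints liburing_in_use=1, which is why the dd/dd.sh@15 early-exit condition is false and the harness proceeds into spdk_dd_basic_rw.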
00:07:34.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:34.088 08:45:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:34.088 08:45:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:07:34.088 08:45:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:34.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.348 --rc genhtml_branch_coverage=1 00:07:34.348 --rc genhtml_function_coverage=1 00:07:34.348 --rc genhtml_legend=1 00:07:34.348 --rc geninfo_all_blocks=1 00:07:34.348 --rc geninfo_unexecuted_blocks=1 00:07:34.348 00:07:34.348 ' 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:34.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.348 --rc genhtml_branch_coverage=1 00:07:34.348 --rc genhtml_function_coverage=1 00:07:34.348 --rc genhtml_legend=1 00:07:34.348 --rc geninfo_all_blocks=1 00:07:34.348 --rc geninfo_unexecuted_blocks=1 00:07:34.348 00:07:34.348 ' 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:34.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.348 --rc genhtml_branch_coverage=1 00:07:34.348 --rc genhtml_function_coverage=1 00:07:34.348 --rc genhtml_legend=1 00:07:34.348 --rc geninfo_all_blocks=1 00:07:34.348 --rc geninfo_unexecuted_blocks=1 00:07:34.348 00:07:34.348 ' 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:34.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.348 --rc genhtml_branch_coverage=1 00:07:34.348 --rc genhtml_function_coverage=1 00:07:34.348 --rc genhtml_legend=1 00:07:34.348 --rc geninfo_all_blocks=1 00:07:34.348 --rc geninfo_unexecuted_blocks=1 00:07:34.348 00:07:34.348 ' 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:34.348 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:34.349 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:34.349 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:34.349 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:34.349 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:34.349 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:34.349 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:34.349 08:45:12 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:34.349 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:34.349 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:34.349 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:34.610 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:34.610 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.611 ************************************ 00:07:34.611 START TEST dd_bs_lt_native_bs 00:07:34.611 ************************************ 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:34.611 08:45:12 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:34.611 { 00:07:34.611 "subsystems": [ 00:07:34.611 { 00:07:34.611 "subsystem": "bdev", 00:07:34.611 "config": [ 00:07:34.611 { 00:07:34.611 "params": { 00:07:34.611 "trtype": "pcie", 00:07:34.611 "traddr": "0000:00:10.0", 00:07:34.611 "name": "Nvme0" 00:07:34.611 }, 00:07:34.611 "method": "bdev_nvme_attach_controller" 00:07:34.611 }, 00:07:34.611 { 00:07:34.611 "method": "bdev_wait_for_examine" 00:07:34.611 } 00:07:34.611 ] 00:07:34.611 } 00:07:34.611 ] 00:07:34.611 } 00:07:34.612 [2024-09-28 08:45:12.579185] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:34.612 [2024-09-28 08:45:12.579577] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60952 ] 00:07:34.870 [2024-09-28 08:45:12.752117] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.129 [2024-09-28 08:45:12.979883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.388 [2024-09-28 08:45:13.149860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.388 [2024-09-28 08:45:13.303282] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:35.388 [2024-09-28 08:45:13.303365] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.991 [2024-09-28 08:45:13.725611] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:36.250 00:07:36.250 real 0m1.626s 00:07:36.250 user 0m1.367s 00:07:36.250 sys 0m0.209s 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.250 
************************************ 00:07:36.250 END TEST dd_bs_lt_native_bs 00:07:36.250 ************************************ 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.250 ************************************ 00:07:36.250 START TEST dd_rw 00:07:36.250 ************************************ 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:36.250 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.817 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:36.817 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:36.817 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.817 08:45:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.817 { 00:07:36.817 "subsystems": [ 00:07:36.817 { 00:07:36.817 "subsystem": "bdev", 00:07:36.817 "config": [ 00:07:36.817 { 00:07:36.817 "params": { 00:07:36.817 "trtype": "pcie", 00:07:36.817 "traddr": "0000:00:10.0", 00:07:36.817 "name": "Nvme0" 00:07:36.817 }, 00:07:36.817 "method": "bdev_nvme_attach_controller" 00:07:36.817 }, 00:07:36.817 { 00:07:36.817 "method": "bdev_wait_for_examine" 00:07:36.817 } 00:07:36.817 ] 
00:07:36.817 } 00:07:36.817 ] 00:07:36.817 } 00:07:37.076 [2024-09-28 08:45:14.826090] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:37.076 [2024-09-28 08:45:14.826269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60995 ] 00:07:37.076 [2024-09-28 08:45:14.997601] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.335 [2024-09-28 08:45:15.154313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.335 [2024-09-28 08:45:15.308074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.531  Copying: 60/60 [kB] (average 14 MBps) 00:07:38.531 00:07:38.531 08:45:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:38.531 08:45:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:38.531 08:45:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.531 08:45:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.790 { 00:07:38.790 "subsystems": [ 00:07:38.790 { 00:07:38.790 "subsystem": "bdev", 00:07:38.790 "config": [ 00:07:38.790 { 00:07:38.790 "params": { 00:07:38.790 "trtype": "pcie", 00:07:38.790 "traddr": "0000:00:10.0", 00:07:38.790 "name": "Nvme0" 00:07:38.790 }, 00:07:38.790 "method": "bdev_nvme_attach_controller" 00:07:38.790 }, 00:07:38.790 { 00:07:38.790 "method": "bdev_wait_for_examine" 00:07:38.790 } 00:07:38.790 ] 00:07:38.790 } 00:07:38.790 ] 00:07:38.790 } 00:07:38.790 [2024-09-28 08:45:16.630749] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
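Stripped of the per-record timestamps, the configuration that gen_conf pipes to spdk_dd over /dev/fd/62 in the records above amounts to the following JSON (a sketch assembled from the fragments logged here: it attaches the NVMe controller at PCI address 0000:00:10.0 as bdev Nvme0, then waits for examine):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }

Each dd_rw iteration that follows keeps the same write/read/verify shape; a condensed sketch of one pass, with the flags copied from these records and the long workspace paths shortened:

    # write the generated pattern to the NVMe bdev, read it back, compare
    spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62
    spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62
    diff -q test/dd/dd.dump0 test/dd/dd.dump1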
00:07:38.790 [2024-09-28 08:45:16.630966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61026 ] 00:07:39.048 [2024-09-28 08:45:16.798620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.048 [2024-09-28 08:45:16.961025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.307 [2024-09-28 08:45:17.120596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.244  Copying: 60/60 [kB] (average 11 MBps) 00:07:40.244 00:07:40.244 08:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.244 08:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:40.244 08:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:40.244 08:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:40.244 08:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:40.244 08:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:40.244 08:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:40.244 08:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:40.244 08:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:40.244 08:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.244 08:45:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.244 { 00:07:40.244 "subsystems": [ 00:07:40.244 { 00:07:40.244 "subsystem": "bdev", 00:07:40.244 "config": [ 00:07:40.244 { 00:07:40.244 "params": { 00:07:40.244 "trtype": "pcie", 00:07:40.244 "traddr": "0000:00:10.0", 00:07:40.244 "name": "Nvme0" 00:07:40.244 }, 00:07:40.244 "method": "bdev_nvme_attach_controller" 00:07:40.244 }, 00:07:40.244 { 00:07:40.244 "method": "bdev_wait_for_examine" 00:07:40.244 } 00:07:40.244 ] 00:07:40.244 } 00:07:40.244 ] 00:07:40.244 } 00:07:40.244 [2024-09-28 08:45:18.225099] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:40.244 [2024-09-28 08:45:18.225551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61054 ] 00:07:40.503 [2024-09-28 08:45:18.390730] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.763 [2024-09-28 08:45:18.540142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.763 [2024-09-28 08:45:18.685233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.958  Copying: 1024/1024 [kB] (average 500 MBps) 00:07:41.958 00:07:41.958 08:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:41.958 08:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:41.958 08:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:41.958 08:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:41.958 08:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:41.958 08:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:41.958 08:45:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.525 08:45:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:42.525 08:45:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:42.525 08:45:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:42.525 08:45:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.525 { 00:07:42.525 "subsystems": [ 00:07:42.525 { 00:07:42.525 "subsystem": "bdev", 00:07:42.525 "config": [ 00:07:42.525 { 00:07:42.525 "params": { 00:07:42.525 "trtype": "pcie", 00:07:42.525 "traddr": "0000:00:10.0", 00:07:42.525 "name": "Nvme0" 00:07:42.525 }, 00:07:42.525 "method": "bdev_nvme_attach_controller" 00:07:42.525 }, 00:07:42.525 { 00:07:42.525 "method": "bdev_wait_for_examine" 00:07:42.525 } 00:07:42.525 ] 00:07:42.525 } 00:07:42.525 ] 00:07:42.525 } 00:07:42.783 [2024-09-28 08:45:20.555335] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
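The 1024/1024 kB copy recorded just above is the clear_nvme step from dd/common.sh: between iterations the test overwrites the start of the bdev by streaming a single 1 MiB block of zeros, i.e. roughly (flags as logged):

    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62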
00:07:42.783 [2024-09-28 08:45:20.555500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61085 ] 00:07:42.783 [2024-09-28 08:45:20.719791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.042 [2024-09-28 08:45:20.868473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.042 [2024-09-28 08:45:21.013910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.235  Copying: 60/60 [kB] (average 58 MBps) 00:07:44.235 00:07:44.235 08:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:44.235 08:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:44.235 08:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:44.235 08:45:22 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.235 { 00:07:44.235 "subsystems": [ 00:07:44.235 { 00:07:44.235 "subsystem": "bdev", 00:07:44.235 "config": [ 00:07:44.235 { 00:07:44.235 "params": { 00:07:44.235 "trtype": "pcie", 00:07:44.235 "traddr": "0000:00:10.0", 00:07:44.235 "name": "Nvme0" 00:07:44.235 }, 00:07:44.236 "method": "bdev_nvme_attach_controller" 00:07:44.236 }, 00:07:44.236 { 00:07:44.236 "method": "bdev_wait_for_examine" 00:07:44.236 } 00:07:44.236 ] 00:07:44.236 } 00:07:44.236 ] 00:07:44.236 } 00:07:44.236 [2024-09-28 08:45:22.141162] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:44.236 [2024-09-28 08:45:22.141586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61110 ] 00:07:44.494 [2024-09-28 08:45:22.305765] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.494 [2024-09-28 08:45:22.465507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.752 [2024-09-28 08:45:22.637578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.944  Copying: 60/60 [kB] (average 58 MBps) 00:07:45.944 00:07:45.944 08:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.944 08:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:45.944 08:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:45.944 08:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:45.944 08:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:45.944 08:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:45.944 08:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:45.944 08:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:45.944 08:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:45.944 08:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.944 08:45:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.944 { 00:07:45.944 "subsystems": [ 00:07:45.944 { 00:07:45.944 "subsystem": "bdev", 00:07:45.944 "config": [ 00:07:45.944 { 00:07:45.944 "params": { 00:07:45.944 "trtype": "pcie", 00:07:45.944 "traddr": "0000:00:10.0", 00:07:45.944 "name": "Nvme0" 00:07:45.944 }, 00:07:45.944 "method": "bdev_nvme_attach_controller" 00:07:45.944 }, 00:07:45.944 { 00:07:45.944 "method": "bdev_wait_for_examine" 00:07:45.944 } 00:07:45.944 ] 00:07:45.944 } 00:07:45.944 ] 00:07:45.944 } 00:07:45.944 [2024-09-28 08:45:23.928927] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:45.944 [2024-09-28 08:45:23.929120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61143 ] 00:07:46.202 [2024-09-28 08:45:24.101290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.471 [2024-09-28 08:45:24.265291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.471 [2024-09-28 08:45:24.423311] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.680  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:47.680 00:07:47.680 08:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:47.680 08:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:47.680 08:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:47.680 08:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:47.680 08:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:47.681 08:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:47.681 08:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:47.681 08:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.247 08:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:48.247 08:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:48.247 08:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:48.247 08:45:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.247 { 00:07:48.247 "subsystems": [ 00:07:48.247 { 00:07:48.247 "subsystem": "bdev", 00:07:48.247 "config": [ 00:07:48.247 { 00:07:48.247 "params": { 00:07:48.247 "trtype": "pcie", 00:07:48.247 "traddr": "0000:00:10.0", 00:07:48.247 "name": "Nvme0" 00:07:48.247 }, 00:07:48.247 "method": "bdev_nvme_attach_controller" 00:07:48.247 }, 00:07:48.247 { 00:07:48.247 "method": "bdev_wait_for_examine" 00:07:48.247 } 00:07:48.247 ] 00:07:48.247 } 00:07:48.247 ] 00:07:48.247 } 00:07:48.247 [2024-09-28 08:45:26.090653] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
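The pass starting here moves to the next block size in the sweep. dd_rw shifts the native 4096-byte block size left by 0, 1 and 2 (bss+=($((native_bs << bs)))) and pairs each size with a count, so the transfer sizes seen in this log work out as:

    4096 << 0 =  4096 bytes x count 15 = 61440 bytes  (the 60 kB copies)
    4096 << 1 =  8192 bytes x count  7 = 57344 bytes  (the 56 kB copies)
    4096 << 2 = 16384 bytes x count  3 = 49152 bytes  (the 48 kB copies)

and every size is run at both queue depths from qds=(1 64).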
00:07:48.247 [2024-09-28 08:45:26.091079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61174 ] 00:07:48.506 [2024-09-28 08:45:26.259712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.506 [2024-09-28 08:45:26.446125] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.765 [2024-09-28 08:45:26.599707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.962  Copying: 56/56 [kB] (average 54 MBps) 00:07:49.962 00:07:49.962 08:45:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:49.962 08:45:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:49.962 08:45:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:49.962 08:45:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.962 { 00:07:49.962 "subsystems": [ 00:07:49.962 { 00:07:49.962 "subsystem": "bdev", 00:07:49.962 "config": [ 00:07:49.962 { 00:07:49.962 "params": { 00:07:49.962 "trtype": "pcie", 00:07:49.962 "traddr": "0000:00:10.0", 00:07:49.962 "name": "Nvme0" 00:07:49.962 }, 00:07:49.962 "method": "bdev_nvme_attach_controller" 00:07:49.962 }, 00:07:49.962 { 00:07:49.962 "method": "bdev_wait_for_examine" 00:07:49.962 } 00:07:49.962 ] 00:07:49.962 } 00:07:49.962 ] 00:07:49.962 } 00:07:49.962 [2024-09-28 08:45:27.882253] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:49.962 [2024-09-28 08:45:27.882438] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61200 ] 00:07:50.221 [2024-09-28 08:45:28.051587] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.221 [2024-09-28 08:45:28.206538] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.481 [2024-09-28 08:45:28.361087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.679  Copying: 56/56 [kB] (average 27 MBps) 00:07:51.679 00:07:51.679 08:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.679 08:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:51.679 08:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:51.679 08:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:51.679 08:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:51.679 08:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:51.679 08:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:51.679 08:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:51.679 08:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:51.679 08:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:51.679 08:45:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:51.679 { 00:07:51.679 "subsystems": [ 00:07:51.679 { 00:07:51.679 "subsystem": "bdev", 00:07:51.679 "config": [ 00:07:51.679 { 00:07:51.679 "params": { 00:07:51.679 "trtype": "pcie", 00:07:51.679 "traddr": "0000:00:10.0", 00:07:51.679 "name": "Nvme0" 00:07:51.679 }, 00:07:51.679 "method": "bdev_nvme_attach_controller" 00:07:51.679 }, 00:07:51.679 { 00:07:51.679 "method": "bdev_wait_for_examine" 00:07:51.679 } 00:07:51.679 ] 00:07:51.679 } 00:07:51.679 ] 00:07:51.679 } 00:07:51.679 [2024-09-28 08:45:29.454942] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:51.679 [2024-09-28 08:45:29.455386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61227 ] 00:07:51.679 [2024-09-28 08:45:29.624791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.939 [2024-09-28 08:45:29.781266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.939 [2024-09-28 08:45:29.932182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.136  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:53.136 00:07:53.395 08:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:53.395 08:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:53.395 08:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:53.395 08:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:53.395 08:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:53.395 08:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:53.395 08:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:53.654 08:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:53.654 08:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:53.654 08:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:53.654 08:45:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:53.912 { 00:07:53.912 "subsystems": [ 00:07:53.912 { 00:07:53.912 "subsystem": "bdev", 00:07:53.912 "config": [ 00:07:53.912 { 00:07:53.912 "params": { 00:07:53.912 "trtype": "pcie", 00:07:53.912 "traddr": "0000:00:10.0", 00:07:53.912 "name": "Nvme0" 00:07:53.912 }, 00:07:53.912 "method": "bdev_nvme_attach_controller" 00:07:53.912 }, 00:07:53.912 { 00:07:53.912 "method": "bdev_wait_for_examine" 00:07:53.912 } 00:07:53.912 ] 00:07:53.912 } 00:07:53.912 ] 00:07:53.912 } 00:07:53.912 [2024-09-28 08:45:31.727134] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:53.912 [2024-09-28 08:45:31.727315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61264 ] 00:07:53.912 [2024-09-28 08:45:31.900443] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.170 [2024-09-28 08:45:32.125408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.428 [2024-09-28 08:45:32.288452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.255  Copying: 56/56 [kB] (average 54 MBps) 00:07:55.255 00:07:55.515 08:45:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:55.515 08:45:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:55.515 08:45:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:55.515 08:45:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:55.515 { 00:07:55.515 "subsystems": [ 00:07:55.515 { 00:07:55.515 "subsystem": "bdev", 00:07:55.515 "config": [ 00:07:55.515 { 00:07:55.515 "params": { 00:07:55.515 "trtype": "pcie", 00:07:55.515 "traddr": "0000:00:10.0", 00:07:55.515 "name": "Nvme0" 00:07:55.515 }, 00:07:55.515 "method": "bdev_nvme_attach_controller" 00:07:55.515 }, 00:07:55.515 { 00:07:55.515 "method": "bdev_wait_for_examine" 00:07:55.515 } 00:07:55.515 ] 00:07:55.515 } 00:07:55.515 ] 00:07:55.515 } 00:07:55.515 [2024-09-28 08:45:33.357326] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:55.515 [2024-09-28 08:45:33.357718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61289 ] 00:07:55.775 [2024-09-28 08:45:33.513606] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.775 [2024-09-28 08:45:33.685506] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.036 [2024-09-28 08:45:33.846360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.422  Copying: 56/56 [kB] (average 54 MBps) 00:07:57.422 00:07:57.422 08:45:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.422 08:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:57.422 08:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:57.423 08:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:57.423 08:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:57.423 08:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:57.423 08:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:57.423 08:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:57.423 08:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:57.423 08:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:57.423 08:45:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.423 { 00:07:57.423 "subsystems": [ 00:07:57.423 { 00:07:57.423 "subsystem": "bdev", 00:07:57.423 "config": [ 00:07:57.423 { 00:07:57.423 "params": { 00:07:57.423 "trtype": "pcie", 00:07:57.423 "traddr": "0000:00:10.0", 00:07:57.423 "name": "Nvme0" 00:07:57.423 }, 00:07:57.423 "method": "bdev_nvme_attach_controller" 00:07:57.423 }, 00:07:57.423 { 00:07:57.423 "method": "bdev_wait_for_examine" 00:07:57.423 } 00:07:57.423 ] 00:07:57.423 } 00:07:57.423 ] 00:07:57.423 } 00:07:57.423 [2024-09-28 08:45:35.117130] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:57.423 [2024-09-28 08:45:35.117314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61317 ] 00:07:57.423 [2024-09-28 08:45:35.287277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.682 [2024-09-28 08:45:35.451897] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.682 [2024-09-28 08:45:35.603443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.879  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:58.879 00:07:58.879 08:45:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:58.879 08:45:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:58.879 08:45:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:58.879 08:45:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:58.879 08:45:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:58.879 08:45:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:58.879 08:45:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:58.879 08:45:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.138 08:45:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:59.139 08:45:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:59.139 08:45:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:59.139 08:45:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:59.139 { 00:07:59.139 "subsystems": [ 00:07:59.139 { 00:07:59.139 "subsystem": "bdev", 00:07:59.139 "config": [ 00:07:59.139 { 00:07:59.139 "params": { 00:07:59.139 "trtype": "pcie", 00:07:59.139 "traddr": "0000:00:10.0", 00:07:59.139 "name": "Nvme0" 00:07:59.139 }, 00:07:59.139 "method": "bdev_nvme_attach_controller" 00:07:59.139 }, 00:07:59.139 { 00:07:59.139 "method": "bdev_wait_for_examine" 00:07:59.139 } 00:07:59.139 ] 00:07:59.139 } 00:07:59.139 ] 00:07:59.139 } 00:07:59.139 [2024-09-28 08:45:37.101747] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:07:59.139 [2024-09-28 08:45:37.101927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61348 ] 00:07:59.399 [2024-09-28 08:45:37.255301] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.658 [2024-09-28 08:45:37.412767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.658 [2024-09-28 08:45:37.593598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.854  Copying: 48/48 [kB] (average 46 MBps) 00:08:00.854 00:08:00.854 08:45:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:00.854 08:45:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:00.854 08:45:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:00.854 08:45:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:00.854 { 00:08:00.854 "subsystems": [ 00:08:00.854 { 00:08:00.854 "subsystem": "bdev", 00:08:00.854 "config": [ 00:08:00.854 { 00:08:00.854 "params": { 00:08:00.854 "trtype": "pcie", 00:08:00.854 "traddr": "0000:00:10.0", 00:08:00.854 "name": "Nvme0" 00:08:00.854 }, 00:08:00.854 "method": "bdev_nvme_attach_controller" 00:08:00.854 }, 00:08:00.854 { 00:08:00.854 "method": "bdev_wait_for_examine" 00:08:00.854 } 00:08:00.854 ] 00:08:00.854 } 00:08:00.854 ] 00:08:00.854 } 00:08:00.854 [2024-09-28 08:45:38.839417] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:00.854 [2024-09-28 08:45:38.839809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61379 ] 00:08:01.112 [2024-09-28 08:45:38.999087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.371 [2024-09-28 08:45:39.155596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.372 [2024-09-28 08:45:39.298473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.570  Copying: 48/48 [kB] (average 46 MBps) 00:08:02.570 00:08:02.570 08:45:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.570 08:45:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:02.570 08:45:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:02.570 08:45:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:02.570 08:45:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:02.570 08:45:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:02.570 08:45:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:02.570 08:45:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:02.570 08:45:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:02.570 08:45:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:02.570 08:45:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.570 { 00:08:02.570 "subsystems": [ 00:08:02.570 { 00:08:02.570 "subsystem": "bdev", 00:08:02.570 "config": [ 00:08:02.570 { 00:08:02.570 "params": { 00:08:02.570 "trtype": "pcie", 00:08:02.570 "traddr": "0000:00:10.0", 00:08:02.570 "name": "Nvme0" 00:08:02.570 }, 00:08:02.570 "method": "bdev_nvme_attach_controller" 00:08:02.570 }, 00:08:02.570 { 00:08:02.570 "method": "bdev_wait_for_examine" 00:08:02.570 } 00:08:02.570 ] 00:08:02.570 } 00:08:02.570 ] 00:08:02.570 } 00:08:02.570 [2024-09-28 08:45:40.389054] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:02.570 [2024-09-28 08:45:40.389235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61401 ] 00:08:02.570 [2024-09-28 08:45:40.551553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.829 [2024-09-28 08:45:40.721956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.089 [2024-09-28 08:45:40.868226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.466  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:04.466 00:08:04.466 08:45:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:04.466 08:45:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:08:04.466 08:45:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:08:04.466 08:45:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:08:04.466 08:45:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:04.466 08:45:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:08:04.466 08:45:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.725 08:45:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:04.725 08:45:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:08:04.725 08:45:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:04.725 08:45:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:04.725 { 00:08:04.725 "subsystems": [ 00:08:04.725 { 00:08:04.725 "subsystem": "bdev", 00:08:04.725 "config": [ 00:08:04.725 { 00:08:04.725 "params": { 00:08:04.725 "trtype": "pcie", 00:08:04.725 "traddr": "0000:00:10.0", 00:08:04.725 "name": "Nvme0" 00:08:04.725 }, 00:08:04.725 "method": "bdev_nvme_attach_controller" 00:08:04.725 }, 00:08:04.725 { 00:08:04.725 "method": "bdev_wait_for_examine" 00:08:04.725 } 00:08:04.725 ] 00:08:04.725 } 00:08:04.725 ] 00:08:04.725 } 00:08:04.725 [2024-09-28 08:45:42.631141] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:04.725 [2024-09-28 08:45:42.631414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61432 ] 00:08:04.984 [2024-09-28 08:45:42.802939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.242 [2024-09-28 08:45:43.011136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.242 [2024-09-28 08:45:43.162393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.434  Copying: 48/48 [kB] (average 46 MBps) 00:08:06.434 00:08:06.434 08:45:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:06.434 08:45:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:08:06.434 08:45:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:06.434 08:45:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:06.434 { 00:08:06.434 "subsystems": [ 00:08:06.434 { 00:08:06.434 "subsystem": "bdev", 00:08:06.434 "config": [ 00:08:06.434 { 00:08:06.434 "params": { 00:08:06.434 "trtype": "pcie", 00:08:06.434 "traddr": "0000:00:10.0", 00:08:06.434 "name": "Nvme0" 00:08:06.434 }, 00:08:06.434 "method": "bdev_nvme_attach_controller" 00:08:06.434 }, 00:08:06.434 { 00:08:06.434 "method": "bdev_wait_for_examine" 00:08:06.434 } 00:08:06.434 ] 00:08:06.434 } 00:08:06.434 ] 00:08:06.434 } 00:08:06.434 [2024-09-28 08:45:44.225449] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:06.434 [2024-09-28 08:45:44.225586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61463 ] 00:08:06.434 [2024-09-28 08:45:44.382406] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.692 [2024-09-28 08:45:44.533154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.949 [2024-09-28 08:45:44.696578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.913  Copying: 48/48 [kB] (average 46 MBps) 00:08:07.913 00:08:07.913 08:45:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.913 08:45:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:07.913 08:45:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:07.913 08:45:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:07.913 08:45:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:08:07.913 08:45:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:07.913 08:45:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:08:07.913 08:45:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:07.913 08:45:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:08:07.913 08:45:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:07.913 08:45:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:08.170 { 00:08:08.170 "subsystems": [ 00:08:08.170 { 00:08:08.170 "subsystem": "bdev", 00:08:08.170 "config": [ 00:08:08.170 { 00:08:08.170 "params": { 00:08:08.170 "trtype": "pcie", 00:08:08.170 "traddr": "0000:00:10.0", 00:08:08.170 "name": "Nvme0" 00:08:08.170 }, 00:08:08.170 "method": "bdev_nvme_attach_controller" 00:08:08.170 }, 00:08:08.170 { 00:08:08.170 "method": "bdev_wait_for_examine" 00:08:08.170 } 00:08:08.170 ] 00:08:08.170 } 00:08:08.170 ] 00:08:08.170 } 00:08:08.170 [2024-09-28 08:45:45.999669] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:08.170 [2024-09-28 08:45:45.999873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61492 ] 00:08:08.170 [2024-09-28 08:45:46.163530] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.428 [2024-09-28 08:45:46.321748] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.687 [2024-09-28 08:45:46.469644] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.625  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:09.625 00:08:09.625 ************************************ 00:08:09.625 END TEST dd_rw 00:08:09.625 ************************************ 00:08:09.625 00:08:09.625 real 0m33.291s 00:08:09.625 user 0m28.229s 00:08:09.625 sys 0m13.968s 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:09.625 ************************************ 00:08:09.625 START TEST dd_rw_offset 00:08:09.625 ************************************ 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=bqt8ft8ojptmh240x5a3le1q7zy5rw0cs2klxtykhyz3jmt2u5ecx5ce19pjy9m2lzkw2f3houudpm213g9t35lijbpor9yy0vohpv6txnkzk7u670t31id2l9wjekg144l1nl3uerjsibzdc0si2i12787afnw7el6pjtgv0kzrvv30e389cy1ni0iuq1reoei3kdg2vxz2g8wpjcfkypph5oz3ub3zmzg9qonsrzytbhnuo66d9rtp5bwu7oh48mb4h9ufnw8s6kc190bqade0sh8y3y6nngflr820myyhb764wtl4x0ypjdaejgletcgs0jk8fi97t1rly7iejhn1s25rabq2h80hlnj9qld5nhdfxt06583krt6hq30vgwgzp1ic8e1nb91ov87iv9eumj5xpn3osjkr1fbzilh34xrxemmpu328vbd6fpcvlxy0skazvojtsmfdvyb9i3kzyrn55q8qg4o1hy3u7qnv79gfsir9ivl825oiawl3ano8w8ko5vbhqeg2y490qfiimnuulk13pfjfkcl6bznu8mk2c0fi0un2oi4cib7x4pyf3i9ksyp0uf2drxtfn420f93j80ta5k5t9ybudo1memkv0x0r5ekg1rsgel6436q5yo35u1ua189lyps7pcjupyzz8ld9613xfd18309p6px9731e8chl5khdimsbp75knakbw19d5h1i2leqjmxtab821lu38kiwtn1wtzncryabm0e93ijazso85z970cuplkj1vnlbocyhz7hg6e4mop90dvo5c8wkjgnununj86zxkoc7xpyh2jyc9rc38dll6s78ndc2uholtrrietgyo6h6uckru4gv7ye754awvu5m3wcjnheyjtghdm6j55nr9i4m4485b4w19zal5c0c6kmh4g49kffqi8m0u3ytk04gvzdwp7kvpz3dr9ig0nczh3deu33acnpd0982ztd63e69l9038h3zeifasx203pep2pz7z44mj55pcspcdv5kvhf3z23z9lgxvjul36bgk475iqanu6pd4w2uoq3rrw5287rzc8tm12jydnz035ftirxne08k90i9qlx3k8guh814ebxtxm5f0fy4me9ncas1ernfyxq3m7li8plnce9oslz02z33iou3518s0csqdjosk2epfk59hevt2o52atlrei1olwtijqjsi448fi318pxrsyl4w0pnpkibmibgky660v9q5jvbmeqv836j28ofqcnah8eh5qdnbeoxn7rvc4ufkyi6y3my9qtjlqv9q4wgf147nlhiyhy1kjp2m69cjso35ey3vwopoflpmm7wjmm8lu5pv1gcvrtfw22b44u122ahirw0jrlohe9gs239ptmogdrn9nha3rtxi3zpbrl44qmbghwtxz1jkhm4r9h737r34as205u1bqpqpwff12sygz3gl4w3nhb23zxmgiry5dl4qndlmpj3c1md097j5vtveg19mqodqbziypcpjp9lokx0j7x7e1ty2tin6opjeyxoectkstzj60jfia5dk3eraf95x6spk52n5iicuw15n69ndpxee4auwsqo1q6opafoekwmdp4l5ilav58qsydrdwsxsek1vsnpm0r5y9zz7dtekuy7phqbh1fz7ea9p7h5hdt935orn7813wfstt0eql1zrbjyuf02tzbebxyhjh6bdcy6ijasy0gqr5rr70njmbresey09on6m3on1m2mzg703lmmjot0pu17zh24crp8uz1panya64sn7s6e4qw1jhv6lpobbnrdmz7df5eul2p6a75yfjynd62ke112fqmx3n0dik4iu7xt40qom4utlyfzi29rokowrtq6wrakj50p73upk453705bqmcmvu6yqolwzow4zpq7p5x7vm92cpmya0w9fzfmvzdv9qvv6rgcwrhras04y9la19fbo6022nkaeabgbdsz7qi36mnqfiaw2smqhjj88x4v1k3lef0q7lfn0a1w98npxxsz7opl82sqs4axx6kw68tbipfpwei5ahml1ujkckmb2mkjkua86th39qu4pwi3tf0rzvaidjg5possy60dedb1ocnnvmqyahqeqn2awh0hsx5evpou0iim6ncaoa9fb0z1petldtdaqla1yfl7g7u6hft2lney0i5cxf44w2nr5n8ne2d47zydxv1ft625sqxn2vvsblaopr42llbrv41hdvo76jzlkeqcjf3p96kvoqi4850220jui8rjkorrha5d5pkzr0zbeu155u8jcqvh1p7le05jnxjiaovmd1nlthjnh32bk7mjwo3yxkgiputzdg17c0rxkp97ya34rv7tj6p291yuvp7x1ny1f9o86bc3tbbljw1ja342u5ojmp930xd4mi1spsjky3yudwu0tzcsr314dzobmwukwg007jo5jatviq5dziv5t633y6u24wfcbg5ccj0gywlmtqeih9y8maw2g2t1ux5aigocw3v6dqdetwtqdb2ew3816wrff06hum3jncbwi11rhibo0z7w7eed2vlgq2zjewabpi7ub4yynx1hq7205ui2fywamgyibh4njij7zu63dg23v2xzf034l0peo5zx4tvhrzblphdam88gsqru22wrabnz5ue0q7jqhqxq3dby0d7n706j333vjpmkpwyec1tkk11l7d32phyt2kgp12o2pqv74pbocyz53ncrg8pf98fepvo78xdgoq6trg9yj4moscn28okuiyghrl0fukf3ywfnl3xuy9yxsl4nckq48ydu7b5560s6x41nijl0fxt1a2dvylqf4uo1k9yzi2exm76mvncb3n8gwcfi3fm919bwxq0y0k26gt3mpc91aeo46rqhvwutsv77x2ho30ytykpzyhv96gzrfhxry3rzria5i2eaxk2bn4ekr30z37k53fbtthvx3ul1dt0ws3eaaszkemsfxnq18d2dri8emd49td63x4ht9u61e361lct33nx30evsma6n7db14vef0vtb7nld0033umsfdgxuqb66oluwkzl8t31q6c3bex40c06fgwibjq5rz323ksq0lep7k5fcbh2a72j3hz92z99l7ppn47yjkaqafmhwr5fcv4l2rfw30a6arl3bishs8068m3qvbzclnto2q3b4w9otofanyha646cxe5gkbwy8lxpra3kzy8njz9gnz1nwumrqlmlx72t98h1k1muweg0atdro2m2q8ouk4q3dspixd18onit5l03jckxfqkz07bg10r2j66art0lqrx23bgo9wqpmxffzspp401wiz3cojnrc6ejcmwpkpasa6o5vqbl2ysregv5h36rgz0ej3e5bby27knbf8msz99oznhey1wu4kp99dlagjlu9pck0jvm9qyppeujse3k5238rqhwo8sc7ldp2hwvnmwoe3zodv1jwz5enrokjlnkktgjuy0g7ehp40bzg2g14w886f3oe5ps2r0mxeo1rwoj2bak69a
ju71vx1iyclev6n1vfka1e6v3lw9gqdpyg99g5p2k2zo04ku8y4u7anoq8gvlb30nr241zrthgszkepro54fwbrad2n72fg8zdeuw17w54e4hucyn7kudqw7wftjpcqmo2v13okzc6u1154vzpegaqdc7tlakemhnm6qv5donbon0ypazz0dqywn80nmtcc34cu3u8l860vk9qbphqecs7z8sfm25qk1tpp2hpd1pl3ndqtin2xre26lhen2gstxq18ckqratn4gzfr9538enno9e2zrb6x3p0soqg85kjzs4jtht6k3affpak78ygtao0l324p2lsxx7u9pb1d7s3xrahizbicrwoqdon86v0oablw60ttlggthgbldmv1hbzh9b6u4peolfkmsadgr4caxion8vokfk8sq65a3mxl5hvp6qdxs2tqzgpizyofdmaizq6ayq3utup347vd9ywkj6u4x8d0fdfww1ayz6ws922yy01rvhxdso91lfwkogu23oz7ijvjk51cyapqjd53l5mgi7olcdz 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:09.625 08:45:47 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:09.625 { 00:08:09.625 "subsystems": [ 00:08:09.625 { 00:08:09.625 "subsystem": "bdev", 00:08:09.625 "config": [ 00:08:09.625 { 00:08:09.625 "params": { 00:08:09.625 "trtype": "pcie", 00:08:09.625 "traddr": "0000:00:10.0", 00:08:09.625 "name": "Nvme0" 00:08:09.625 }, 00:08:09.625 "method": "bdev_nvme_attach_controller" 00:08:09.625 }, 00:08:09.625 { 00:08:09.625 "method": "bdev_wait_for_examine" 00:08:09.625 } 00:08:09.625 ] 00:08:09.625 } 00:08:09.625 ] 00:08:09.625 } 00:08:09.884 [2024-09-28 08:45:47.641420] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:09.885 [2024-09-28 08:45:47.641621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61533 ] 00:08:09.885 [2024-09-28 08:45:47.806644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.143 [2024-09-28 08:45:47.960373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.143 [2024-09-28 08:45:48.124768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.339  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:11.339 00:08:11.339 08:45:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:08:11.339 08:45:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:11.339 08:45:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:08:11.339 08:45:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:11.598 { 00:08:11.598 "subsystems": [ 00:08:11.598 { 00:08:11.598 "subsystem": "bdev", 00:08:11.598 "config": [ 00:08:11.598 { 00:08:11.598 "params": { 00:08:11.598 "trtype": "pcie", 00:08:11.598 "traddr": "0000:00:10.0", 00:08:11.599 "name": "Nvme0" 00:08:11.599 }, 00:08:11.599 "method": "bdev_nvme_attach_controller" 00:08:11.599 }, 00:08:11.599 { 00:08:11.599 "method": "bdev_wait_for_examine" 00:08:11.599 } 00:08:11.599 ] 00:08:11.599 } 00:08:11.599 ] 00:08:11.599 } 00:08:11.599 [2024-09-28 08:45:49.435263] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:11.599 [2024-09-28 08:45:49.435441] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61564 ] 00:08:11.858 [2024-09-28 08:45:49.604918] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.858 [2024-09-28 08:45:49.773607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.117 [2024-09-28 08:45:49.932224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.055  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:13.055 00:08:13.055 08:45:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:13.056 08:45:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ bqt8ft8ojptmh240x5a3le1q7zy5rw0cs2klxtykhyz3jmt2u5ecx5ce19pjy9m2lzkw2f3houudpm213g9t35lijbpor9yy0vohpv6txnkzk7u670t31id2l9wjekg144l1nl3uerjsibzdc0si2i12787afnw7el6pjtgv0kzrvv30e389cy1ni0iuq1reoei3kdg2vxz2g8wpjcfkypph5oz3ub3zmzg9qonsrzytbhnuo66d9rtp5bwu7oh48mb4h9ufnw8s6kc190bqade0sh8y3y6nngflr820myyhb764wtl4x0ypjdaejgletcgs0jk8fi97t1rly7iejhn1s25rabq2h80hlnj9qld5nhdfxt06583krt6hq30vgwgzp1ic8e1nb91ov87iv9eumj5xpn3osjkr1fbzilh34xrxemmpu328vbd6fpcvlxy0skazvojtsmfdvyb9i3kzyrn55q8qg4o1hy3u7qnv79gfsir9ivl825oiawl3ano8w8ko5vbhqeg2y490qfiimnuulk13pfjfkcl6bznu8mk2c0fi0un2oi4cib7x4pyf3i9ksyp0uf2drxtfn420f93j80ta5k5t9ybudo1memkv0x0r5ekg1rsgel6436q5yo35u1ua189lyps7pcjupyzz8ld9613xfd18309p6px9731e8chl5khdimsbp75knakbw19d5h1i2leqjmxtab821lu38kiwtn1wtzncryabm0e93ijazso85z970cuplkj1vnlbocyhz7hg6e4mop90dvo5c8wkjgnununj86zxkoc7xpyh2jyc9rc38dll6s78ndc2uholtrrietgyo6h6uckru4gv7ye754awvu5m3wcjnheyjtghdm6j55nr9i4m4485b4w19zal5c0c6kmh4g49kffqi8m0u3ytk04gvzdwp7kvpz3dr9ig0nczh3deu33acnpd0982ztd63e69l9038h3zeifasx203pep2pz7z44mj55pcspcdv5kvhf3z23z9lgxvjul36bgk475iqanu6pd4w2uoq3rrw5287rzc8tm12jydnz035ftirxne08k90i9qlx3k8guh814ebxtxm5f0fy4me9ncas1ernfyxq3m7li8plnce9oslz02z33iou3518s0csqdjosk2epfk59hevt2o52atlrei1olwtijqjsi448fi318pxrsyl4w0pnpkibmibgky660v9q5jvbmeqv836j28ofqcnah8eh5qdnbeoxn7rvc4ufkyi6y3my9qtjlqv9q4wgf147nlhiyhy1kjp2m69cjso35ey3vwopoflpmm7wjmm8lu5pv1gcvrtfw22b44u122ahirw0jrlohe9gs239ptmogdrn9nha3rtxi3zpbrl44qmbghwtxz1jkhm4r9h737r34as205u1bqpqpwff12sygz3gl4w3nhb23zxmgiry5dl4qndlmpj3c1md097j5vtveg19mqodqbziypcpjp9lokx0j7x7e1ty2tin6opjeyxoectkstzj60jfia5dk3eraf95x6spk52n5iicuw15n69ndpxee4auwsqo1q6opafoekwmdp4l5ilav58qsydrdwsxsek1vsnpm0r5y9zz7dtekuy7phqbh1fz7ea9p7h5hdt935orn7813wfstt0eql1zrbjyuf02tzbebxyhjh6bdcy6ijasy0gqr5rr70njmbresey09on6m3on1m2mzg703lmmjot0pu17zh24crp8uz1panya64sn7s6e4qw1jhv6lpobbnrdmz7df5eul2p6a75yfjynd62ke112fqmx3n0dik4iu7xt40qom4utlyfzi29rokowrtq6wrakj50p73upk453705bqmcmvu6yqolwzow4zpq7p5x7vm92cpmya0w9fzfmvzdv9qvv6rgcwrhras04y9la19fbo6022nkaeabgbdsz7qi36mnqfiaw2smqhjj88x4v1k3lef0q7lfn0a1w98npxxsz7opl82sqs4axx6kw68tbipfpwei5ahml1ujkckmb2mkjkua86th39qu4pwi3tf0rzvaidjg5possy60dedb1ocnnvmqyahqeqn2awh0hsx5evpou0iim6ncaoa9fb0z1petldtdaqla1yfl7g7u6hft2lney0i5cxf44w2nr5n8ne2d47zydxv1ft625sqxn2vvsblaopr42llbrv41hdvo76jzlkeqcjf3p96kvoqi4850220jui8rjkorrha5d5pkzr0zbeu155u8jcqvh1p7le05jnxjiaovmd1nlthjnh32bk7mjwo3yxkgiputzdg17c0rxkp97ya34rv7tj6p291yuvp7x1ny1f9o86bc3tbbljw1ja342u5ojmp930xd4mi1spsjky3yudwu0tzcsr314dzobmwukwg007jo5jatviq5dziv5t633y6u24wfcbg5ccj0gywlmtqeih9y8maw2g2t1ux5aigocw3v6dqdetwtqdb2ew3816wrff06hum3jncbwi11rhibo0z7w7eed2vlgq2zjewabpi7ub4yynx1hq7205ui2fywamgyibh4njij7zu63dg23v2x
zf034l0peo5zx4tvhrzblphdam88gsqru22wrabnz5ue0q7jqhqxq3dby0d7n706j333vjpmkpwyec1tkk11l7d32phyt2kgp12o2pqv74pbocyz53ncrg8pf98fepvo78xdgoq6trg9yj4moscn28okuiyghrl0fukf3ywfnl3xuy9yxsl4nckq48ydu7b5560s6x41nijl0fxt1a2dvylqf4uo1k9yzi2exm76mvncb3n8gwcfi3fm919bwxq0y0k26gt3mpc91aeo46rqhvwutsv77x2ho30ytykpzyhv96gzrfhxry3rzria5i2eaxk2bn4ekr30z37k53fbtthvx3ul1dt0ws3eaaszkemsfxnq18d2dri8emd49td63x4ht9u61e361lct33nx30evsma6n7db14vef0vtb7nld0033umsfdgxuqb66oluwkzl8t31q6c3bex40c06fgwibjq5rz323ksq0lep7k5fcbh2a72j3hz92z99l7ppn47yjkaqafmhwr5fcv4l2rfw30a6arl3bishs8068m3qvbzclnto2q3b4w9otofanyha646cxe5gkbwy8lxpra3kzy8njz9gnz1nwumrqlmlx72t98h1k1muweg0atdro2m2q8ouk4q3dspixd18onit5l03jckxfqkz07bg10r2j66art0lqrx23bgo9wqpmxffzspp401wiz3cojnrc6ejcmwpkpasa6o5vqbl2ysregv5h36rgz0ej3e5bby27knbf8msz99oznhey1wu4kp99dlagjlu9pck0jvm9qyppeujse3k5238rqhwo8sc7ldp2hwvnmwoe3zodv1jwz5enrokjlnkktgjuy0g7ehp40bzg2g14w886f3oe5ps2r0mxeo1rwoj2bak69aju71vx1iyclev6n1vfka1e6v3lw9gqdpyg99g5p2k2zo04ku8y4u7anoq8gvlb30nr241zrthgszkepro54fwbrad2n72fg8zdeuw17w54e4hucyn7kudqw7wftjpcqmo2v13okzc6u1154vzpegaqdc7tlakemhnm6qv5donbon0ypazz0dqywn80nmtcc34cu3u8l860vk9qbphqecs7z8sfm25qk1tpp2hpd1pl3ndqtin2xre26lhen2gstxq18ckqratn4gzfr9538enno9e2zrb6x3p0soqg85kjzs4jtht6k3affpak78ygtao0l324p2lsxx7u9pb1d7s3xrahizbicrwoqdon86v0oablw60ttlggthgbldmv1hbzh9b6u4peolfkmsadgr4caxion8vokfk8sq65a3mxl5hvp6qdxs2tqzgpizyofdmaizq6ayq3utup347vd9ywkj6u4x8d0fdfww1ayz6ws922yy01rvhxdso91lfwkogu23oz7ijvjk51cyapqjd53l5mgi7olcdz == \b\q\t\8\f\t\8\o\j\p\t\m\h\2\4\0\x\5\a\3\l\e\1\q\7\z\y\5\r\w\0\c\s\2\k\l\x\t\y\k\h\y\z\3\j\m\t\2\u\5\e\c\x\5\c\e\1\9\p\j\y\9\m\2\l\z\k\w\2\f\3\h\o\u\u\d\p\m\2\1\3\g\9\t\3\5\l\i\j\b\p\o\r\9\y\y\0\v\o\h\p\v\6\t\x\n\k\z\k\7\u\6\7\0\t\3\1\i\d\2\l\9\w\j\e\k\g\1\4\4\l\1\n\l\3\u\e\r\j\s\i\b\z\d\c\0\s\i\2\i\1\2\7\8\7\a\f\n\w\7\e\l\6\p\j\t\g\v\0\k\z\r\v\v\3\0\e\3\8\9\c\y\1\n\i\0\i\u\q\1\r\e\o\e\i\3\k\d\g\2\v\x\z\2\g\8\w\p\j\c\f\k\y\p\p\h\5\o\z\3\u\b\3\z\m\z\g\9\q\o\n\s\r\z\y\t\b\h\n\u\o\6\6\d\9\r\t\p\5\b\w\u\7\o\h\4\8\m\b\4\h\9\u\f\n\w\8\s\6\k\c\1\9\0\b\q\a\d\e\0\s\h\8\y\3\y\6\n\n\g\f\l\r\8\2\0\m\y\y\h\b\7\6\4\w\t\l\4\x\0\y\p\j\d\a\e\j\g\l\e\t\c\g\s\0\j\k\8\f\i\9\7\t\1\r\l\y\7\i\e\j\h\n\1\s\2\5\r\a\b\q\2\h\8\0\h\l\n\j\9\q\l\d\5\n\h\d\f\x\t\0\6\5\8\3\k\r\t\6\h\q\3\0\v\g\w\g\z\p\1\i\c\8\e\1\n\b\9\1\o\v\8\7\i\v\9\e\u\m\j\5\x\p\n\3\o\s\j\k\r\1\f\b\z\i\l\h\3\4\x\r\x\e\m\m\p\u\3\2\8\v\b\d\6\f\p\c\v\l\x\y\0\s\k\a\z\v\o\j\t\s\m\f\d\v\y\b\9\i\3\k\z\y\r\n\5\5\q\8\q\g\4\o\1\h\y\3\u\7\q\n\v\7\9\g\f\s\i\r\9\i\v\l\8\2\5\o\i\a\w\l\3\a\n\o\8\w\8\k\o\5\v\b\h\q\e\g\2\y\4\9\0\q\f\i\i\m\n\u\u\l\k\1\3\p\f\j\f\k\c\l\6\b\z\n\u\8\m\k\2\c\0\f\i\0\u\n\2\o\i\4\c\i\b\7\x\4\p\y\f\3\i\9\k\s\y\p\0\u\f\2\d\r\x\t\f\n\4\2\0\f\9\3\j\8\0\t\a\5\k\5\t\9\y\b\u\d\o\1\m\e\m\k\v\0\x\0\r\5\e\k\g\1\r\s\g\e\l\6\4\3\6\q\5\y\o\3\5\u\1\u\a\1\8\9\l\y\p\s\7\p\c\j\u\p\y\z\z\8\l\d\9\6\1\3\x\f\d\1\8\3\0\9\p\6\p\x\9\7\3\1\e\8\c\h\l\5\k\h\d\i\m\s\b\p\7\5\k\n\a\k\b\w\1\9\d\5\h\1\i\2\l\e\q\j\m\x\t\a\b\8\2\1\l\u\3\8\k\i\w\t\n\1\w\t\z\n\c\r\y\a\b\m\0\e\9\3\i\j\a\z\s\o\8\5\z\9\7\0\c\u\p\l\k\j\1\v\n\l\b\o\c\y\h\z\7\h\g\6\e\4\m\o\p\9\0\d\v\o\5\c\8\w\k\j\g\n\u\n\u\n\j\8\6\z\x\k\o\c\7\x\p\y\h\2\j\y\c\9\r\c\3\8\d\l\l\6\s\7\8\n\d\c\2\u\h\o\l\t\r\r\i\e\t\g\y\o\6\h\6\u\c\k\r\u\4\g\v\7\y\e\7\5\4\a\w\v\u\5\m\3\w\c\j\n\h\e\y\j\t\g\h\d\m\6\j\5\5\n\r\9\i\4\m\4\4\8\5\b\4\w\1\9\z\a\l\5\c\0\c\6\k\m\h\4\g\4\9\k\f\f\q\i\8\m\0\u\3\y\t\k\0\4\g\v\z\d\w\p\7\k\v\p\z\3\d\r\9\i\g\0\n\c\z\h\3\d\e\u\3\3\a\c\n\p\d\0\9\8\2\z\t\d\6\3\e\6\9\l\9\0\3\8\h\3\z\e\i\f\a\s\x\2\0\3\p\e\p\2\p\z\7\z\4\4\m\j\5\5\p\c\s\p\c\d\v\5\k\v\h\f\3\z\2\3\z\9\l\g\x\v\j\u\l\3
\6\b\g\k\4\7\5\i\q\a\n\u\6\p\d\4\w\2\u\o\q\3\r\r\w\5\2\8\7\r\z\c\8\t\m\1\2\j\y\d\n\z\0\3\5\f\t\i\r\x\n\e\0\8\k\9\0\i\9\q\l\x\3\k\8\g\u\h\8\1\4\e\b\x\t\x\m\5\f\0\f\y\4\m\e\9\n\c\a\s\1\e\r\n\f\y\x\q\3\m\7\l\i\8\p\l\n\c\e\9\o\s\l\z\0\2\z\3\3\i\o\u\3\5\1\8\s\0\c\s\q\d\j\o\s\k\2\e\p\f\k\5\9\h\e\v\t\2\o\5\2\a\t\l\r\e\i\1\o\l\w\t\i\j\q\j\s\i\4\4\8\f\i\3\1\8\p\x\r\s\y\l\4\w\0\p\n\p\k\i\b\m\i\b\g\k\y\6\6\0\v\9\q\5\j\v\b\m\e\q\v\8\3\6\j\2\8\o\f\q\c\n\a\h\8\e\h\5\q\d\n\b\e\o\x\n\7\r\v\c\4\u\f\k\y\i\6\y\3\m\y\9\q\t\j\l\q\v\9\q\4\w\g\f\1\4\7\n\l\h\i\y\h\y\1\k\j\p\2\m\6\9\c\j\s\o\3\5\e\y\3\v\w\o\p\o\f\l\p\m\m\7\w\j\m\m\8\l\u\5\p\v\1\g\c\v\r\t\f\w\2\2\b\4\4\u\1\2\2\a\h\i\r\w\0\j\r\l\o\h\e\9\g\s\2\3\9\p\t\m\o\g\d\r\n\9\n\h\a\3\r\t\x\i\3\z\p\b\r\l\4\4\q\m\b\g\h\w\t\x\z\1\j\k\h\m\4\r\9\h\7\3\7\r\3\4\a\s\2\0\5\u\1\b\q\p\q\p\w\f\f\1\2\s\y\g\z\3\g\l\4\w\3\n\h\b\2\3\z\x\m\g\i\r\y\5\d\l\4\q\n\d\l\m\p\j\3\c\1\m\d\0\9\7\j\5\v\t\v\e\g\1\9\m\q\o\d\q\b\z\i\y\p\c\p\j\p\9\l\o\k\x\0\j\7\x\7\e\1\t\y\2\t\i\n\6\o\p\j\e\y\x\o\e\c\t\k\s\t\z\j\6\0\j\f\i\a\5\d\k\3\e\r\a\f\9\5\x\6\s\p\k\5\2\n\5\i\i\c\u\w\1\5\n\6\9\n\d\p\x\e\e\4\a\u\w\s\q\o\1\q\6\o\p\a\f\o\e\k\w\m\d\p\4\l\5\i\l\a\v\5\8\q\s\y\d\r\d\w\s\x\s\e\k\1\v\s\n\p\m\0\r\5\y\9\z\z\7\d\t\e\k\u\y\7\p\h\q\b\h\1\f\z\7\e\a\9\p\7\h\5\h\d\t\9\3\5\o\r\n\7\8\1\3\w\f\s\t\t\0\e\q\l\1\z\r\b\j\y\u\f\0\2\t\z\b\e\b\x\y\h\j\h\6\b\d\c\y\6\i\j\a\s\y\0\g\q\r\5\r\r\7\0\n\j\m\b\r\e\s\e\y\0\9\o\n\6\m\3\o\n\1\m\2\m\z\g\7\0\3\l\m\m\j\o\t\0\p\u\1\7\z\h\2\4\c\r\p\8\u\z\1\p\a\n\y\a\6\4\s\n\7\s\6\e\4\q\w\1\j\h\v\6\l\p\o\b\b\n\r\d\m\z\7\d\f\5\e\u\l\2\p\6\a\7\5\y\f\j\y\n\d\6\2\k\e\1\1\2\f\q\m\x\3\n\0\d\i\k\4\i\u\7\x\t\4\0\q\o\m\4\u\t\l\y\f\z\i\2\9\r\o\k\o\w\r\t\q\6\w\r\a\k\j\5\0\p\7\3\u\p\k\4\5\3\7\0\5\b\q\m\c\m\v\u\6\y\q\o\l\w\z\o\w\4\z\p\q\7\p\5\x\7\v\m\9\2\c\p\m\y\a\0\w\9\f\z\f\m\v\z\d\v\9\q\v\v\6\r\g\c\w\r\h\r\a\s\0\4\y\9\l\a\1\9\f\b\o\6\0\2\2\n\k\a\e\a\b\g\b\d\s\z\7\q\i\3\6\m\n\q\f\i\a\w\2\s\m\q\h\j\j\8\8\x\4\v\1\k\3\l\e\f\0\q\7\l\f\n\0\a\1\w\9\8\n\p\x\x\s\z\7\o\p\l\8\2\s\q\s\4\a\x\x\6\k\w\6\8\t\b\i\p\f\p\w\e\i\5\a\h\m\l\1\u\j\k\c\k\m\b\2\m\k\j\k\u\a\8\6\t\h\3\9\q\u\4\p\w\i\3\t\f\0\r\z\v\a\i\d\j\g\5\p\o\s\s\y\6\0\d\e\d\b\1\o\c\n\n\v\m\q\y\a\h\q\e\q\n\2\a\w\h\0\h\s\x\5\e\v\p\o\u\0\i\i\m\6\n\c\a\o\a\9\f\b\0\z\1\p\e\t\l\d\t\d\a\q\l\a\1\y\f\l\7\g\7\u\6\h\f\t\2\l\n\e\y\0\i\5\c\x\f\4\4\w\2\n\r\5\n\8\n\e\2\d\4\7\z\y\d\x\v\1\f\t\6\2\5\s\q\x\n\2\v\v\s\b\l\a\o\p\r\4\2\l\l\b\r\v\4\1\h\d\v\o\7\6\j\z\l\k\e\q\c\j\f\3\p\9\6\k\v\o\q\i\4\8\5\0\2\2\0\j\u\i\8\r\j\k\o\r\r\h\a\5\d\5\p\k\z\r\0\z\b\e\u\1\5\5\u\8\j\c\q\v\h\1\p\7\l\e\0\5\j\n\x\j\i\a\o\v\m\d\1\n\l\t\h\j\n\h\3\2\b\k\7\m\j\w\o\3\y\x\k\g\i\p\u\t\z\d\g\1\7\c\0\r\x\k\p\9\7\y\a\3\4\r\v\7\t\j\6\p\2\9\1\y\u\v\p\7\x\1\n\y\1\f\9\o\8\6\b\c\3\t\b\b\l\j\w\1\j\a\3\4\2\u\5\o\j\m\p\9\3\0\x\d\4\m\i\1\s\p\s\j\k\y\3\y\u\d\w\u\0\t\z\c\s\r\3\1\4\d\z\o\b\m\w\u\k\w\g\0\0\7\j\o\5\j\a\t\v\i\q\5\d\z\i\v\5\t\6\3\3\y\6\u\2\4\w\f\c\b\g\5\c\c\j\0\g\y\w\l\m\t\q\e\i\h\9\y\8\m\a\w\2\g\2\t\1\u\x\5\a\i\g\o\c\w\3\v\6\d\q\d\e\t\w\t\q\d\b\2\e\w\3\8\1\6\w\r\f\f\0\6\h\u\m\3\j\n\c\b\w\i\1\1\r\h\i\b\o\0\z\7\w\7\e\e\d\2\v\l\g\q\2\z\j\e\w\a\b\p\i\7\u\b\4\y\y\n\x\1\h\q\7\2\0\5\u\i\2\f\y\w\a\m\g\y\i\b\h\4\n\j\i\j\7\z\u\6\3\d\g\2\3\v\2\x\z\f\0\3\4\l\0\p\e\o\5\z\x\4\t\v\h\r\z\b\l\p\h\d\a\m\8\8\g\s\q\r\u\2\2\w\r\a\b\n\z\5\u\e\0\q\7\j\q\h\q\x\q\3\d\b\y\0\d\7\n\7\0\6\j\3\3\3\v\j\p\m\k\p\w\y\e\c\1\t\k\k\1\1\l\7\d\3\2\p\h\y\t\2\k\g\p\1\2\o\2\p\q\v\7\4\p\b\o\c\y\z\5\3\n\c\r\g\8\p\f\9\8\f\e\p\v\o\7\8\x\d\g\o\q\6\t\r\g\9\y\j\4\m\o\s\c\n\2\8\o\k\u\i\y\g\h\r\l\0\f\u\k\f\3\y\w\f\n\l\3\x\u\y\9\y\x\s\l\4\n\c\k\q\4\8\y\
d\u\7\b\5\5\6\0\s\6\x\4\1\n\i\j\l\0\f\x\t\1\a\2\d\v\y\l\q\f\4\u\o\1\k\9\y\z\i\2\e\x\m\7\6\m\v\n\c\b\3\n\8\g\w\c\f\i\3\f\m\9\1\9\b\w\x\q\0\y\0\k\2\6\g\t\3\m\p\c\9\1\a\e\o\4\6\r\q\h\v\w\u\t\s\v\7\7\x\2\h\o\3\0\y\t\y\k\p\z\y\h\v\9\6\g\z\r\f\h\x\r\y\3\r\z\r\i\a\5\i\2\e\a\x\k\2\b\n\4\e\k\r\3\0\z\3\7\k\5\3\f\b\t\t\h\v\x\3\u\l\1\d\t\0\w\s\3\e\a\a\s\z\k\e\m\s\f\x\n\q\1\8\d\2\d\r\i\8\e\m\d\4\9\t\d\6\3\x\4\h\t\9\u\6\1\e\3\6\1\l\c\t\3\3\n\x\3\0\e\v\s\m\a\6\n\7\d\b\1\4\v\e\f\0\v\t\b\7\n\l\d\0\0\3\3\u\m\s\f\d\g\x\u\q\b\6\6\o\l\u\w\k\z\l\8\t\3\1\q\6\c\3\b\e\x\4\0\c\0\6\f\g\w\i\b\j\q\5\r\z\3\2\3\k\s\q\0\l\e\p\7\k\5\f\c\b\h\2\a\7\2\j\3\h\z\9\2\z\9\9\l\7\p\p\n\4\7\y\j\k\a\q\a\f\m\h\w\r\5\f\c\v\4\l\2\r\f\w\3\0\a\6\a\r\l\3\b\i\s\h\s\8\0\6\8\m\3\q\v\b\z\c\l\n\t\o\2\q\3\b\4\w\9\o\t\o\f\a\n\y\h\a\6\4\6\c\x\e\5\g\k\b\w\y\8\l\x\p\r\a\3\k\z\y\8\n\j\z\9\g\n\z\1\n\w\u\m\r\q\l\m\l\x\7\2\t\9\8\h\1\k\1\m\u\w\e\g\0\a\t\d\r\o\2\m\2\q\8\o\u\k\4\q\3\d\s\p\i\x\d\1\8\o\n\i\t\5\l\0\3\j\c\k\x\f\q\k\z\0\7\b\g\1\0\r\2\j\6\6\a\r\t\0\l\q\r\x\2\3\b\g\o\9\w\q\p\m\x\f\f\z\s\p\p\4\0\1\w\i\z\3\c\o\j\n\r\c\6\e\j\c\m\w\p\k\p\a\s\a\6\o\5\v\q\b\l\2\y\s\r\e\g\v\5\h\3\6\r\g\z\0\e\j\3\e\5\b\b\y\2\7\k\n\b\f\8\m\s\z\9\9\o\z\n\h\e\y\1\w\u\4\k\p\9\9\d\l\a\g\j\l\u\9\p\c\k\0\j\v\m\9\q\y\p\p\e\u\j\s\e\3\k\5\2\3\8\r\q\h\w\o\8\s\c\7\l\d\p\2\h\w\v\n\m\w\o\e\3\z\o\d\v\1\j\w\z\5\e\n\r\o\k\j\l\n\k\k\t\g\j\u\y\0\g\7\e\h\p\4\0\b\z\g\2\g\1\4\w\8\8\6\f\3\o\e\5\p\s\2\r\0\m\x\e\o\1\r\w\o\j\2\b\a\k\6\9\a\j\u\7\1\v\x\1\i\y\c\l\e\v\6\n\1\v\f\k\a\1\e\6\v\3\l\w\9\g\q\d\p\y\g\9\9\g\5\p\2\k\2\z\o\0\4\k\u\8\y\4\u\7\a\n\o\q\8\g\v\l\b\3\0\n\r\2\4\1\z\r\t\h\g\s\z\k\e\p\r\o\5\4\f\w\b\r\a\d\2\n\7\2\f\g\8\z\d\e\u\w\1\7\w\5\4\e\4\h\u\c\y\n\7\k\u\d\q\w\7\w\f\t\j\p\c\q\m\o\2\v\1\3\o\k\z\c\6\u\1\1\5\4\v\z\p\e\g\a\q\d\c\7\t\l\a\k\e\m\h\n\m\6\q\v\5\d\o\n\b\o\n\0\y\p\a\z\z\0\d\q\y\w\n\8\0\n\m\t\c\c\3\4\c\u\3\u\8\l\8\6\0\v\k\9\q\b\p\h\q\e\c\s\7\z\8\s\f\m\2\5\q\k\1\t\p\p\2\h\p\d\1\p\l\3\n\d\q\t\i\n\2\x\r\e\2\6\l\h\e\n\2\g\s\t\x\q\1\8\c\k\q\r\a\t\n\4\g\z\f\r\9\5\3\8\e\n\n\o\9\e\2\z\r\b\6\x\3\p\0\s\o\q\g\8\5\k\j\z\s\4\j\t\h\t\6\k\3\a\f\f\p\a\k\7\8\y\g\t\a\o\0\l\3\2\4\p\2\l\s\x\x\7\u\9\p\b\1\d\7\s\3\x\r\a\h\i\z\b\i\c\r\w\o\q\d\o\n\8\6\v\0\o\a\b\l\w\6\0\t\t\l\g\g\t\h\g\b\l\d\m\v\1\h\b\z\h\9\b\6\u\4\p\e\o\l\f\k\m\s\a\d\g\r\4\c\a\x\i\o\n\8\v\o\k\f\k\8\s\q\6\5\a\3\m\x\l\5\h\v\p\6\q\d\x\s\2\t\q\z\g\p\i\z\y\o\f\d\m\a\i\z\q\6\a\y\q\3\u\t\u\p\3\4\7\v\d\9\y\w\k\j\6\u\4\x\8\d\0\f\d\f\w\w\1\a\y\z\6\w\s\9\2\2\y\y\0\1\r\v\h\x\d\s\o\9\1\l\f\w\k\o\g\u\2\3\o\z\7\i\j\v\j\k\5\1\c\y\a\p\q\j\d\5\3\l\5\m\g\i\7\o\l\c\d\z ]] 00:08:13.056 00:08:13.056 real 0m3.511s 00:08:13.056 user 0m2.963s 00:08:13.056 sys 0m1.624s 00:08:13.056 ************************************ 00:08:13.056 END TEST dd_rw_offset 00:08:13.056 ************************************ 00:08:13.056 08:45:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.056 08:45:51 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:13.315 08:45:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:13.315 08:45:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:13.315 08:45:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:13.315 08:45:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:13.315 08:45:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:13.315 08:45:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:08:13.315 08:45:51 
spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:13.315 08:45:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:13.315 08:45:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:13.315 08:45:51 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:13.315 08:45:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:13.315 { 00:08:13.315 "subsystems": [ 00:08:13.315 { 00:08:13.315 "subsystem": "bdev", 00:08:13.315 "config": [ 00:08:13.315 { 00:08:13.315 "params": { 00:08:13.315 "trtype": "pcie", 00:08:13.315 "traddr": "0000:00:10.0", 00:08:13.315 "name": "Nvme0" 00:08:13.315 }, 00:08:13.315 "method": "bdev_nvme_attach_controller" 00:08:13.315 }, 00:08:13.315 { 00:08:13.315 "method": "bdev_wait_for_examine" 00:08:13.315 } 00:08:13.315 ] 00:08:13.315 } 00:08:13.315 ] 00:08:13.315 } 00:08:13.315 [2024-09-28 08:45:51.152889] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:13.315 [2024-09-28 08:45:51.153091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61606 ] 00:08:13.574 [2024-09-28 08:45:51.324398] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.574 [2024-09-28 08:45:51.472969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.833 [2024-09-28 08:45:51.620823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.770  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:14.770 00:08:15.027 08:45:52 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.027 00:08:15.027 real 0m40.786s 00:08:15.027 user 0m34.274s 00:08:15.027 sys 0m16.935s 00:08:15.027 ************************************ 00:08:15.027 END TEST spdk_dd_basic_rw 00:08:15.027 ************************************ 00:08:15.027 08:45:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.027 08:45:52 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:15.027 08:45:52 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:15.027 08:45:52 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:15.027 08:45:52 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.027 08:45:52 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:15.027 ************************************ 00:08:15.027 START TEST spdk_dd_posix 00:08:15.027 ************************************ 00:08:15.027 08:45:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:15.027 * Looking for test storage... 
00:08:15.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:15.027 08:45:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:15.027 08:45:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:08:15.027 08:45:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:15.286 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:15.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.287 --rc genhtml_branch_coverage=1 00:08:15.287 --rc genhtml_function_coverage=1 00:08:15.287 --rc genhtml_legend=1 00:08:15.287 --rc geninfo_all_blocks=1 00:08:15.287 --rc geninfo_unexecuted_blocks=1 00:08:15.287 00:08:15.287 ' 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:15.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.287 --rc genhtml_branch_coverage=1 00:08:15.287 --rc genhtml_function_coverage=1 00:08:15.287 --rc genhtml_legend=1 00:08:15.287 --rc geninfo_all_blocks=1 00:08:15.287 --rc geninfo_unexecuted_blocks=1 00:08:15.287 00:08:15.287 ' 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:15.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.287 --rc genhtml_branch_coverage=1 00:08:15.287 --rc genhtml_function_coverage=1 00:08:15.287 --rc genhtml_legend=1 00:08:15.287 --rc geninfo_all_blocks=1 00:08:15.287 --rc geninfo_unexecuted_blocks=1 00:08:15.287 00:08:15.287 ' 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:15.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.287 --rc genhtml_branch_coverage=1 00:08:15.287 --rc genhtml_function_coverage=1 00:08:15.287 --rc genhtml_legend=1 00:08:15.287 --rc geninfo_all_blocks=1 00:08:15.287 --rc geninfo_unexecuted_blocks=1 00:08:15.287 00:08:15.287 ' 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:15.287 * First test run, liburing in use 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:15.287 ************************************ 00:08:15.287 START TEST dd_flag_append 00:08:15.287 ************************************ 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=mry4hr070g5jy6to3jw8qgv9yrlpwvd8 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=whzcysy8twnw1dvtvbsv9eib0f6tlg9x 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s mry4hr070g5jy6to3jw8qgv9yrlpwvd8 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s whzcysy8twnw1dvtvbsv9eib0f6tlg9x 00:08:15.287 08:45:53 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:15.287 [2024-09-28 08:45:53.188196] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:15.287 [2024-09-28 08:45:53.188374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61690 ] 00:08:15.546 [2024-09-28 08:45:53.356260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.546 [2024-09-28 08:45:53.516021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.806 [2024-09-28 08:45:53.663096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.184  Copying: 32/32 [B] (average 31 kBps) 00:08:17.184 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ whzcysy8twnw1dvtvbsv9eib0f6tlg9xmry4hr070g5jy6to3jw8qgv9yrlpwvd8 == \w\h\z\c\y\s\y\8\t\w\n\w\1\d\v\t\v\b\s\v\9\e\i\b\0\f\6\t\l\g\9\x\m\r\y\4\h\r\0\7\0\g\5\j\y\6\t\o\3\j\w\8\q\g\v\9\y\r\l\p\w\v\d\8 ]] 00:08:17.184 00:08:17.184 real 0m1.711s 00:08:17.184 user 0m1.390s 00:08:17.184 sys 0m0.861s 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:17.184 ************************************ 00:08:17.184 END TEST dd_flag_append 00:08:17.184 ************************************ 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:17.184 ************************************ 00:08:17.184 START TEST dd_flag_directory 00:08:17.184 ************************************ 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:17.184 08:45:54 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:17.184 [2024-09-28 08:45:54.941750] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:17.184 [2024-09-28 08:45:54.942233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61730 ] 00:08:17.184 [2024-09-28 08:45:55.112381] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.443 [2024-09-28 08:45:55.264680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.725 [2024-09-28 08:45:55.441534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.725 [2024-09-28 08:45:55.522783] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:17.725 [2024-09-28 08:45:55.522869] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:17.725 [2024-09-28 08:45:55.522891] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.300 [2024-09-28 08:45:56.120295] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.560 08:45:56 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.560 08:45:56 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:18.820 [2024-09-28 08:45:56.584899] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:18.820 [2024-09-28 08:45:56.585096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61756 ] 00:08:18.820 [2024-09-28 08:45:56.755945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.079 [2024-09-28 08:45:56.916024] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.079 [2024-09-28 08:45:57.072790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.338 [2024-09-28 08:45:57.156565] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:19.338 [2024-09-28 08:45:57.156635] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:19.338 [2024-09-28 08:45:57.156659] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.907 [2024-09-28 08:45:57.758273] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:20.166 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:08:20.166 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:20.166 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:08:20.166 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:08:20.166 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:08:20.166 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:20.166 00:08:20.166 real 0m3.279s 00:08:20.166 user 0m2.648s 00:08:20.166 sys 0m0.408s 00:08:20.166 ************************************ 00:08:20.166 END TEST dd_flag_directory 00:08:20.166 ************************************ 00:08:20.166 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.166 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:20.166 08:45:58 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:20.166 08:45:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:20.166 08:45:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.166 08:45:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:20.425 ************************************ 00:08:20.425 START TEST dd_flag_nofollow 00:08:20.425 ************************************ 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:20.425 08:45:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.425 [2024-09-28 08:45:58.283000] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:20.425 [2024-09-28 08:45:58.283177] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61798 ] 00:08:20.686 [2024-09-28 08:45:58.453086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.686 [2024-09-28 08:45:58.613842] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.945 [2024-09-28 08:45:58.776830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.945 [2024-09-28 08:45:58.868252] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:20.945 [2024-09-28 08:45:58.868631] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:20.945 [2024-09-28 08:45:58.868666] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.511 [2024-09-28 08:45:59.494296] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:22.079 08:45:59 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:22.079 08:45:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:22.079 [2024-09-28 08:45:59.991734] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:22.079 [2024-09-28 08:45:59.991959] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61819 ] 00:08:22.338 [2024-09-28 08:46:00.162692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.338 [2024-09-28 08:46:00.321311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.598 [2024-09-28 08:46:00.475255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.598 [2024-09-28 08:46:00.559202] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:22.598 [2024-09-28 08:46:00.559285] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:22.598 [2024-09-28 08:46:00.559309] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:23.532 [2024-09-28 08:46:01.192561] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:23.791 08:46:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:08:23.791 08:46:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.791 08:46:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:08:23.791 08:46:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:08:23.791 08:46:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:08:23.791 08:46:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:23.791 08:46:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:23.791 08:46:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:23.791 08:46:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:23.791 08:46:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:23.791 [2024-09-28 08:46:01.700967] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:23.791 [2024-09-28 08:46:01.701170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61839 ] 00:08:24.050 [2024-09-28 08:46:01.868240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.308 [2024-09-28 08:46:02.045049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.308 [2024-09-28 08:46:02.214266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.503  Copying: 512/512 [B] (average 500 kBps) 00:08:25.503 00:08:25.503 ************************************ 00:08:25.503 END TEST dd_flag_nofollow 00:08:25.503 ************************************ 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 9c5h5gtn32vbrix6k6mqpv2bi6mbtlind5uqs3n4rexsf3dpjape7ghf3tjtg6en21oegel9rottjueiq5hn8l53k5glkfv9lltor3ezpg9pfdg8fa2s3i9l6z2h9hfa3zbc8s7hmfbtucwsgnq7s6ffiwd8ukuucqiwkydjms4jjs81edqs3sfoq514giyavmgq81wloi676azhnv3zrxl01bq3oxky9e30kdse42hhmmq0vg1m4s4an30jtvus7h0j5lfxk55f4k9dmyduwie9anyzckwfmu5tkm2qz9ux7frx918xhjuy4rp37bi1rakv4vgr8cdxzrfvnbqcrctk3ebjuu9rnds3feofpj6le6f59qrh2gcwqd0hlbv5tajv6glsmj5791djwlw14ny9ldiy4u0i1bgdg57nuqkogdwajar3437byxoiv4vivhhyw4t0rsfm5hktyrjdig8zxrd386cs5fbip6fa94lpmupq2r7h0q74kr0n6f3w == \9\c\5\h\5\g\t\n\3\2\v\b\r\i\x\6\k\6\m\q\p\v\2\b\i\6\m\b\t\l\i\n\d\5\u\q\s\3\n\4\r\e\x\s\f\3\d\p\j\a\p\e\7\g\h\f\3\t\j\t\g\6\e\n\2\1\o\e\g\e\l\9\r\o\t\t\j\u\e\i\q\5\h\n\8\l\5\3\k\5\g\l\k\f\v\9\l\l\t\o\r\3\e\z\p\g\9\p\f\d\g\8\f\a\2\s\3\i\9\l\6\z\2\h\9\h\f\a\3\z\b\c\8\s\7\h\m\f\b\t\u\c\w\s\g\n\q\7\s\6\f\f\i\w\d\8\u\k\u\u\c\q\i\w\k\y\d\j\m\s\4\j\j\s\8\1\e\d\q\s\3\s\f\o\q\5\1\4\g\i\y\a\v\m\g\q\8\1\w\l\o\i\6\7\6\a\z\h\n\v\3\z\r\x\l\0\1\b\q\3\o\x\k\y\9\e\3\0\k\d\s\e\4\2\h\h\m\m\q\0\v\g\1\m\4\s\4\a\n\3\0\j\t\v\u\s\7\h\0\j\5\l\f\x\k\5\5\f\4\k\9\d\m\y\d\u\w\i\e\9\a\n\y\z\c\k\w\f\m\u\5\t\k\m\2\q\z\9\u\x\7\f\r\x\9\1\8\x\h\j\u\y\4\r\p\3\7\b\i\1\r\a\k\v\4\v\g\r\8\c\d\x\z\r\f\v\n\b\q\c\r\c\t\k\3\e\b\j\u\u\9\r\n\d\s\3\f\e\o\f\p\j\6\l\e\6\f\5\9\q\r\h\2\g\c\w\q\d\0\h\l\b\v\5\t\a\j\v\6\g\l\s\m\j\5\7\9\1\d\j\w\l\w\1\4\n\y\9\l\d\i\y\4\u\0\i\1\b\g\d\g\5\7\n\u\q\k\o\g\d\w\a\j\a\r\3\4\3\7\b\y\x\o\i\v\4\v\i\v\h\h\y\w\4\t\0\r\s\f\m\5\h\k\t\y\r\j\d\i\g\8\z\x\r\d\3\8\6\c\s\5\f\b\i\p\6\f\a\9\4\l\p\m\u\p\q\2\r\7\h\0\q\7\4\k\r\0\n\6\f\3\w ]] 00:08:25.503 00:08:25.503 real 0m5.201s 00:08:25.503 user 0m4.244s 00:08:25.503 sys 0m1.275s 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:25.503 ************************************ 00:08:25.503 START TEST dd_flag_noatime 00:08:25.503 ************************************ 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:08:25.503 08:46:03 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1727513162 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1727513163 00:08:25.503 08:46:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:26.882 08:46:04 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.882 [2024-09-28 08:46:04.557263] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:26.882 [2024-09-28 08:46:04.557474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61899 ] 00:08:26.882 [2024-09-28 08:46:04.728567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.142 [2024-09-28 08:46:04.888920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.142 [2024-09-28 08:46:05.054812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.412  Copying: 512/512 [B] (average 500 kBps) 00:08:28.412 00:08:28.412 08:46:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:28.412 08:46:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1727513162 )) 00:08:28.412 08:46:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.412 08:46:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1727513163 )) 00:08:28.412 08:46:06 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:28.412 [2024-09-28 08:46:06.250829] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:28.412 [2024-09-28 08:46:06.251256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61924 ] 00:08:28.669 [2024-09-28 08:46:06.408252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.669 [2024-09-28 08:46:06.570758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.927 [2024-09-28 08:46:06.728168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.864  Copying: 512/512 [B] (average 500 kBps) 00:08:29.864 00:08:29.864 08:46:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:29.864 08:46:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1727513166 )) 00:08:29.864 00:08:29.864 real 0m4.430s 00:08:29.864 user 0m2.776s 00:08:29.864 sys 0m1.693s 00:08:29.864 08:46:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.864 08:46:07 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:29.864 ************************************ 00:08:29.864 END TEST dd_flag_noatime 00:08:29.864 ************************************ 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:30.123 ************************************ 00:08:30.123 START TEST dd_flags_misc 00:08:30.123 ************************************ 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:30.123 08:46:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:30.123 [2024-09-28 08:46:08.028232] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
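The xtrace above shows how dd_flags_misc builds its flag matrix: flags_ro=(direct nonblock) feeds the input side, flags_rw extends the same list with sync and dsync for the output side, and every pairing gets its own 512-byte copy. A minimal sketch of that loop, assuming the spdk_dd path printed in this log and throwaway dump files in the current directory (the suite's own fixtures live under test/dd/):

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary path as printed in this log
    DUMP0=./dd.dump0
    DUMP1=./dd.dump1
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
      # stand-in for the suite's gen_bytes 512 helper, which regenerates the payload per outer pass
      head -c 512 /dev/urandom > "$DUMP0"
      for flag_rw in "${flags_rw[@]}"; do
        "$SPDK_DD" --if="$DUMP0" --iflag="$flag_ro" --of="$DUMP1" --oflag="$flag_rw"
      done
    done

Only the open flags on the two descriptors change between passes; the payload stays 512 bytes, which is why every run below reports Copying: 512/512 [B].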
00:08:30.123 [2024-09-28 08:46:08.028409] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61965 ] 00:08:30.382 [2024-09-28 08:46:08.201889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.640 [2024-09-28 08:46:08.408423] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.640 [2024-09-28 08:46:08.573932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.834  Copying: 512/512 [B] (average 500 kBps) 00:08:31.834 00:08:31.834 08:46:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sgttk3yc9xohrt27xnudixed9upp1fvzvo8ljwz5jqz67ldp8ebas2g6979ktpgbsfmfin271cenh4gu0e5yd59jveu0jxjy2ihk5t5wfz4pcyss3ekywc0fr7c3unherhvrc1aq06ylgicd9cz1eizxxbd32am6t42nhv26r3nbdvqzx2064tzpmp1lbam6qi88mvpkw45w1xobcbive8skiy06zuilx54s9hi7hpx59jcp2egwfacfdtrbt2ifm1lj5i3241rhnruomo28liz2lgk83mhkm9x0bm0k8zd4spk0gngsknnd2apsfeml3zvqkki323ada8hhl0zloxq0rhjddplyrcuk6e64dg81suzkvzr2v2j7xj183g17rzdw61z678yr76aj4s5dd6xktp4qr5mi79pizln3gx9t8kewve4w4bkeban8wylr7jr927teoh3y4zdzrl2zeeefrf9lmmchkkiascvakl8o24artpfuh9fyyyoolm3c == \s\g\t\t\k\3\y\c\9\x\o\h\r\t\2\7\x\n\u\d\i\x\e\d\9\u\p\p\1\f\v\z\v\o\8\l\j\w\z\5\j\q\z\6\7\l\d\p\8\e\b\a\s\2\g\6\9\7\9\k\t\p\g\b\s\f\m\f\i\n\2\7\1\c\e\n\h\4\g\u\0\e\5\y\d\5\9\j\v\e\u\0\j\x\j\y\2\i\h\k\5\t\5\w\f\z\4\p\c\y\s\s\3\e\k\y\w\c\0\f\r\7\c\3\u\n\h\e\r\h\v\r\c\1\a\q\0\6\y\l\g\i\c\d\9\c\z\1\e\i\z\x\x\b\d\3\2\a\m\6\t\4\2\n\h\v\2\6\r\3\n\b\d\v\q\z\x\2\0\6\4\t\z\p\m\p\1\l\b\a\m\6\q\i\8\8\m\v\p\k\w\4\5\w\1\x\o\b\c\b\i\v\e\8\s\k\i\y\0\6\z\u\i\l\x\5\4\s\9\h\i\7\h\p\x\5\9\j\c\p\2\e\g\w\f\a\c\f\d\t\r\b\t\2\i\f\m\1\l\j\5\i\3\2\4\1\r\h\n\r\u\o\m\o\2\8\l\i\z\2\l\g\k\8\3\m\h\k\m\9\x\0\b\m\0\k\8\z\d\4\s\p\k\0\g\n\g\s\k\n\n\d\2\a\p\s\f\e\m\l\3\z\v\q\k\k\i\3\2\3\a\d\a\8\h\h\l\0\z\l\o\x\q\0\r\h\j\d\d\p\l\y\r\c\u\k\6\e\6\4\d\g\8\1\s\u\z\k\v\z\r\2\v\2\j\7\x\j\1\8\3\g\1\7\r\z\d\w\6\1\z\6\7\8\y\r\7\6\a\j\4\s\5\d\d\6\x\k\t\p\4\q\r\5\m\i\7\9\p\i\z\l\n\3\g\x\9\t\8\k\e\w\v\e\4\w\4\b\k\e\b\a\n\8\w\y\l\r\7\j\r\9\2\7\t\e\o\h\3\y\4\z\d\z\r\l\2\z\e\e\e\f\r\f\9\l\m\m\c\h\k\k\i\a\s\c\v\a\k\l\8\o\2\4\a\r\t\p\f\u\h\9\f\y\y\y\o\o\l\m\3\c ]] 00:08:31.834 08:46:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:31.834 08:46:09 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:31.834 [2024-09-28 08:46:09.771796] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:31.834 [2024-09-28 08:46:09.771949] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61992 ] 00:08:32.092 [2024-09-28 08:46:09.927338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.092 [2024-09-28 08:46:10.087101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.351 [2024-09-28 08:46:10.237223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.726  Copying: 512/512 [B] (average 500 kBps) 00:08:33.726 00:08:33.727 08:46:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sgttk3yc9xohrt27xnudixed9upp1fvzvo8ljwz5jqz67ldp8ebas2g6979ktpgbsfmfin271cenh4gu0e5yd59jveu0jxjy2ihk5t5wfz4pcyss3ekywc0fr7c3unherhvrc1aq06ylgicd9cz1eizxxbd32am6t42nhv26r3nbdvqzx2064tzpmp1lbam6qi88mvpkw45w1xobcbive8skiy06zuilx54s9hi7hpx59jcp2egwfacfdtrbt2ifm1lj5i3241rhnruomo28liz2lgk83mhkm9x0bm0k8zd4spk0gngsknnd2apsfeml3zvqkki323ada8hhl0zloxq0rhjddplyrcuk6e64dg81suzkvzr2v2j7xj183g17rzdw61z678yr76aj4s5dd6xktp4qr5mi79pizln3gx9t8kewve4w4bkeban8wylr7jr927teoh3y4zdzrl2zeeefrf9lmmchkkiascvakl8o24artpfuh9fyyyoolm3c == \s\g\t\t\k\3\y\c\9\x\o\h\r\t\2\7\x\n\u\d\i\x\e\d\9\u\p\p\1\f\v\z\v\o\8\l\j\w\z\5\j\q\z\6\7\l\d\p\8\e\b\a\s\2\g\6\9\7\9\k\t\p\g\b\s\f\m\f\i\n\2\7\1\c\e\n\h\4\g\u\0\e\5\y\d\5\9\j\v\e\u\0\j\x\j\y\2\i\h\k\5\t\5\w\f\z\4\p\c\y\s\s\3\e\k\y\w\c\0\f\r\7\c\3\u\n\h\e\r\h\v\r\c\1\a\q\0\6\y\l\g\i\c\d\9\c\z\1\e\i\z\x\x\b\d\3\2\a\m\6\t\4\2\n\h\v\2\6\r\3\n\b\d\v\q\z\x\2\0\6\4\t\z\p\m\p\1\l\b\a\m\6\q\i\8\8\m\v\p\k\w\4\5\w\1\x\o\b\c\b\i\v\e\8\s\k\i\y\0\6\z\u\i\l\x\5\4\s\9\h\i\7\h\p\x\5\9\j\c\p\2\e\g\w\f\a\c\f\d\t\r\b\t\2\i\f\m\1\l\j\5\i\3\2\4\1\r\h\n\r\u\o\m\o\2\8\l\i\z\2\l\g\k\8\3\m\h\k\m\9\x\0\b\m\0\k\8\z\d\4\s\p\k\0\g\n\g\s\k\n\n\d\2\a\p\s\f\e\m\l\3\z\v\q\k\k\i\3\2\3\a\d\a\8\h\h\l\0\z\l\o\x\q\0\r\h\j\d\d\p\l\y\r\c\u\k\6\e\6\4\d\g\8\1\s\u\z\k\v\z\r\2\v\2\j\7\x\j\1\8\3\g\1\7\r\z\d\w\6\1\z\6\7\8\y\r\7\6\a\j\4\s\5\d\d\6\x\k\t\p\4\q\r\5\m\i\7\9\p\i\z\l\n\3\g\x\9\t\8\k\e\w\v\e\4\w\4\b\k\e\b\a\n\8\w\y\l\r\7\j\r\9\2\7\t\e\o\h\3\y\4\z\d\z\r\l\2\z\e\e\e\f\r\f\9\l\m\m\c\h\k\k\i\a\s\c\v\a\k\l\8\o\2\4\a\r\t\p\f\u\h\9\f\y\y\y\o\o\l\m\3\c ]] 00:08:33.727 08:46:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:33.727 08:46:11 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:33.727 [2024-09-28 08:46:11.411824] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:33.727 [2024-09-28 08:46:11.411992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62010 ] 00:08:33.727 [2024-09-28 08:46:11.572369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.985 [2024-09-28 08:46:11.736525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.985 [2024-09-28 08:46:11.886835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.367  Copying: 512/512 [B] (average 125 kBps) 00:08:35.367 00:08:35.367 08:46:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sgttk3yc9xohrt27xnudixed9upp1fvzvo8ljwz5jqz67ldp8ebas2g6979ktpgbsfmfin271cenh4gu0e5yd59jveu0jxjy2ihk5t5wfz4pcyss3ekywc0fr7c3unherhvrc1aq06ylgicd9cz1eizxxbd32am6t42nhv26r3nbdvqzx2064tzpmp1lbam6qi88mvpkw45w1xobcbive8skiy06zuilx54s9hi7hpx59jcp2egwfacfdtrbt2ifm1lj5i3241rhnruomo28liz2lgk83mhkm9x0bm0k8zd4spk0gngsknnd2apsfeml3zvqkki323ada8hhl0zloxq0rhjddplyrcuk6e64dg81suzkvzr2v2j7xj183g17rzdw61z678yr76aj4s5dd6xktp4qr5mi79pizln3gx9t8kewve4w4bkeban8wylr7jr927teoh3y4zdzrl2zeeefrf9lmmchkkiascvakl8o24artpfuh9fyyyoolm3c == \s\g\t\t\k\3\y\c\9\x\o\h\r\t\2\7\x\n\u\d\i\x\e\d\9\u\p\p\1\f\v\z\v\o\8\l\j\w\z\5\j\q\z\6\7\l\d\p\8\e\b\a\s\2\g\6\9\7\9\k\t\p\g\b\s\f\m\f\i\n\2\7\1\c\e\n\h\4\g\u\0\e\5\y\d\5\9\j\v\e\u\0\j\x\j\y\2\i\h\k\5\t\5\w\f\z\4\p\c\y\s\s\3\e\k\y\w\c\0\f\r\7\c\3\u\n\h\e\r\h\v\r\c\1\a\q\0\6\y\l\g\i\c\d\9\c\z\1\e\i\z\x\x\b\d\3\2\a\m\6\t\4\2\n\h\v\2\6\r\3\n\b\d\v\q\z\x\2\0\6\4\t\z\p\m\p\1\l\b\a\m\6\q\i\8\8\m\v\p\k\w\4\5\w\1\x\o\b\c\b\i\v\e\8\s\k\i\y\0\6\z\u\i\l\x\5\4\s\9\h\i\7\h\p\x\5\9\j\c\p\2\e\g\w\f\a\c\f\d\t\r\b\t\2\i\f\m\1\l\j\5\i\3\2\4\1\r\h\n\r\u\o\m\o\2\8\l\i\z\2\l\g\k\8\3\m\h\k\m\9\x\0\b\m\0\k\8\z\d\4\s\p\k\0\g\n\g\s\k\n\n\d\2\a\p\s\f\e\m\l\3\z\v\q\k\k\i\3\2\3\a\d\a\8\h\h\l\0\z\l\o\x\q\0\r\h\j\d\d\p\l\y\r\c\u\k\6\e\6\4\d\g\8\1\s\u\z\k\v\z\r\2\v\2\j\7\x\j\1\8\3\g\1\7\r\z\d\w\6\1\z\6\7\8\y\r\7\6\a\j\4\s\5\d\d\6\x\k\t\p\4\q\r\5\m\i\7\9\p\i\z\l\n\3\g\x\9\t\8\k\e\w\v\e\4\w\4\b\k\e\b\a\n\8\w\y\l\r\7\j\r\9\2\7\t\e\o\h\3\y\4\z\d\z\r\l\2\z\e\e\e\f\r\f\9\l\m\m\c\h\k\k\i\a\s\c\v\a\k\l\8\o\2\4\a\r\t\p\f\u\h\9\f\y\y\y\o\o\l\m\3\c ]] 00:08:35.367 08:46:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:35.367 08:46:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:35.367 [2024-09-28 08:46:13.057763] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
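The very long [[ sgttk3yc... == \s\g\t\t... ]] lines are plain bash xtrace from dd/posix.sh@93: the right-hand side of == is an unquoted pattern, so set -x prints it with every character backslash-escaped. The assertion itself only says that the bytes generated into dd.dump0 came back out of dd.dump1 intact after the flagged copy; a stripped-down readback check in the same spirit (the helper the suite actually uses is not visible in this excerpt) would be:

    # run after one of the flagged copies above
    src=$(< ./dd.dump0)
    dst=$(< ./dd.dump1)
    [[ $dst == "$src" ]] || { echo 'flagged copy corrupted data' >&2; exit 1; }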
00:08:35.367 [2024-09-28 08:46:13.057950] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62035 ] 00:08:35.367 [2024-09-28 08:46:13.224029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.625 [2024-09-28 08:46:13.393246] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.625 [2024-09-28 08:46:13.560138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.818  Copying: 512/512 [B] (average 166 kBps) 00:08:36.818 00:08:36.818 08:46:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ sgttk3yc9xohrt27xnudixed9upp1fvzvo8ljwz5jqz67ldp8ebas2g6979ktpgbsfmfin271cenh4gu0e5yd59jveu0jxjy2ihk5t5wfz4pcyss3ekywc0fr7c3unherhvrc1aq06ylgicd9cz1eizxxbd32am6t42nhv26r3nbdvqzx2064tzpmp1lbam6qi88mvpkw45w1xobcbive8skiy06zuilx54s9hi7hpx59jcp2egwfacfdtrbt2ifm1lj5i3241rhnruomo28liz2lgk83mhkm9x0bm0k8zd4spk0gngsknnd2apsfeml3zvqkki323ada8hhl0zloxq0rhjddplyrcuk6e64dg81suzkvzr2v2j7xj183g17rzdw61z678yr76aj4s5dd6xktp4qr5mi79pizln3gx9t8kewve4w4bkeban8wylr7jr927teoh3y4zdzrl2zeeefrf9lmmchkkiascvakl8o24artpfuh9fyyyoolm3c == \s\g\t\t\k\3\y\c\9\x\o\h\r\t\2\7\x\n\u\d\i\x\e\d\9\u\p\p\1\f\v\z\v\o\8\l\j\w\z\5\j\q\z\6\7\l\d\p\8\e\b\a\s\2\g\6\9\7\9\k\t\p\g\b\s\f\m\f\i\n\2\7\1\c\e\n\h\4\g\u\0\e\5\y\d\5\9\j\v\e\u\0\j\x\j\y\2\i\h\k\5\t\5\w\f\z\4\p\c\y\s\s\3\e\k\y\w\c\0\f\r\7\c\3\u\n\h\e\r\h\v\r\c\1\a\q\0\6\y\l\g\i\c\d\9\c\z\1\e\i\z\x\x\b\d\3\2\a\m\6\t\4\2\n\h\v\2\6\r\3\n\b\d\v\q\z\x\2\0\6\4\t\z\p\m\p\1\l\b\a\m\6\q\i\8\8\m\v\p\k\w\4\5\w\1\x\o\b\c\b\i\v\e\8\s\k\i\y\0\6\z\u\i\l\x\5\4\s\9\h\i\7\h\p\x\5\9\j\c\p\2\e\g\w\f\a\c\f\d\t\r\b\t\2\i\f\m\1\l\j\5\i\3\2\4\1\r\h\n\r\u\o\m\o\2\8\l\i\z\2\l\g\k\8\3\m\h\k\m\9\x\0\b\m\0\k\8\z\d\4\s\p\k\0\g\n\g\s\k\n\n\d\2\a\p\s\f\e\m\l\3\z\v\q\k\k\i\3\2\3\a\d\a\8\h\h\l\0\z\l\o\x\q\0\r\h\j\d\d\p\l\y\r\c\u\k\6\e\6\4\d\g\8\1\s\u\z\k\v\z\r\2\v\2\j\7\x\j\1\8\3\g\1\7\r\z\d\w\6\1\z\6\7\8\y\r\7\6\a\j\4\s\5\d\d\6\x\k\t\p\4\q\r\5\m\i\7\9\p\i\z\l\n\3\g\x\9\t\8\k\e\w\v\e\4\w\4\b\k\e\b\a\n\8\w\y\l\r\7\j\r\9\2\7\t\e\o\h\3\y\4\z\d\z\r\l\2\z\e\e\e\f\r\f\9\l\m\m\c\h\k\k\i\a\s\c\v\a\k\l\8\o\2\4\a\r\t\p\f\u\h\9\f\y\y\y\o\o\l\m\3\c ]] 00:08:36.818 08:46:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:36.818 08:46:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:36.818 08:46:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:36.818 08:46:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:36.818 08:46:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:36.818 08:46:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:37.077 [2024-09-28 08:46:14.825960] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:37.077 [2024-09-28 08:46:14.826132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62062 ] 00:08:37.077 [2024-09-28 08:46:14.996009] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.336 [2024-09-28 08:46:15.160578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.336 [2024-09-28 08:46:15.318432] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.531  Copying: 512/512 [B] (average 500 kBps) 00:08:38.531 00:08:38.532 08:46:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ d8z5gza66v89kubagg4ltgzwqvanbhk93r1s5bsapjxylgtt1um0fvm3pa8txjz4lb1ddagw5yjmg4qudfdh6zx68zdfsfaqonrw5jjdsa95vi86hibvjpyyc794lj1h7hwlsq1jqcuis3c43gqjzfgf6k3umahh1alnww07bhf6lsk593ru7fsy01wznteuna6mm7dzi8df69g3g18ij0kjjk9m5u8sic59u577ikvrlidz8ypzzkxibvcvik2xenlswr0fb9e8z8fzeh7aca8xdv4y58ffosrq6c2vbg86kqtctewzk1w1mnxldxf8hu68na6jol55dib2etajnehnyjkbcsd1tumo16113i3z5p0zulvt6tt4b2lxmj0gq16tz42guwxi689cxfc97nbh1e697dtqr1dnwgctoirvzoqa5jftotmjp1w093u17xwp3lda8ew5s0k2almj4ojviqm3hmc2l1b67sxue732tiqh29wo34wcr8lrgt2q == \d\8\z\5\g\z\a\6\6\v\8\9\k\u\b\a\g\g\4\l\t\g\z\w\q\v\a\n\b\h\k\9\3\r\1\s\5\b\s\a\p\j\x\y\l\g\t\t\1\u\m\0\f\v\m\3\p\a\8\t\x\j\z\4\l\b\1\d\d\a\g\w\5\y\j\m\g\4\q\u\d\f\d\h\6\z\x\6\8\z\d\f\s\f\a\q\o\n\r\w\5\j\j\d\s\a\9\5\v\i\8\6\h\i\b\v\j\p\y\y\c\7\9\4\l\j\1\h\7\h\w\l\s\q\1\j\q\c\u\i\s\3\c\4\3\g\q\j\z\f\g\f\6\k\3\u\m\a\h\h\1\a\l\n\w\w\0\7\b\h\f\6\l\s\k\5\9\3\r\u\7\f\s\y\0\1\w\z\n\t\e\u\n\a\6\m\m\7\d\z\i\8\d\f\6\9\g\3\g\1\8\i\j\0\k\j\j\k\9\m\5\u\8\s\i\c\5\9\u\5\7\7\i\k\v\r\l\i\d\z\8\y\p\z\z\k\x\i\b\v\c\v\i\k\2\x\e\n\l\s\w\r\0\f\b\9\e\8\z\8\f\z\e\h\7\a\c\a\8\x\d\v\4\y\5\8\f\f\o\s\r\q\6\c\2\v\b\g\8\6\k\q\t\c\t\e\w\z\k\1\w\1\m\n\x\l\d\x\f\8\h\u\6\8\n\a\6\j\o\l\5\5\d\i\b\2\e\t\a\j\n\e\h\n\y\j\k\b\c\s\d\1\t\u\m\o\1\6\1\1\3\i\3\z\5\p\0\z\u\l\v\t\6\t\t\4\b\2\l\x\m\j\0\g\q\1\6\t\z\4\2\g\u\w\x\i\6\8\9\c\x\f\c\9\7\n\b\h\1\e\6\9\7\d\t\q\r\1\d\n\w\g\c\t\o\i\r\v\z\o\q\a\5\j\f\t\o\t\m\j\p\1\w\0\9\3\u\1\7\x\w\p\3\l\d\a\8\e\w\5\s\0\k\2\a\l\m\j\4\o\j\v\i\q\m\3\h\m\c\2\l\1\b\6\7\s\x\u\e\7\3\2\t\i\q\h\2\9\w\o\3\4\w\c\r\8\l\r\g\t\2\q ]] 00:08:38.532 08:46:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:38.532 08:46:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:38.532 [2024-09-28 08:46:16.477431] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:38.532 [2024-09-28 08:46:16.477596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62078 ] 00:08:38.797 [2024-09-28 08:46:16.646053] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.057 [2024-09-28 08:46:16.806703] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.057 [2024-09-28 08:46:16.961696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.435  Copying: 512/512 [B] (average 500 kBps) 00:08:40.435 00:08:40.435 08:46:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ d8z5gza66v89kubagg4ltgzwqvanbhk93r1s5bsapjxylgtt1um0fvm3pa8txjz4lb1ddagw5yjmg4qudfdh6zx68zdfsfaqonrw5jjdsa95vi86hibvjpyyc794lj1h7hwlsq1jqcuis3c43gqjzfgf6k3umahh1alnww07bhf6lsk593ru7fsy01wznteuna6mm7dzi8df69g3g18ij0kjjk9m5u8sic59u577ikvrlidz8ypzzkxibvcvik2xenlswr0fb9e8z8fzeh7aca8xdv4y58ffosrq6c2vbg86kqtctewzk1w1mnxldxf8hu68na6jol55dib2etajnehnyjkbcsd1tumo16113i3z5p0zulvt6tt4b2lxmj0gq16tz42guwxi689cxfc97nbh1e697dtqr1dnwgctoirvzoqa5jftotmjp1w093u17xwp3lda8ew5s0k2almj4ojviqm3hmc2l1b67sxue732tiqh29wo34wcr8lrgt2q == \d\8\z\5\g\z\a\6\6\v\8\9\k\u\b\a\g\g\4\l\t\g\z\w\q\v\a\n\b\h\k\9\3\r\1\s\5\b\s\a\p\j\x\y\l\g\t\t\1\u\m\0\f\v\m\3\p\a\8\t\x\j\z\4\l\b\1\d\d\a\g\w\5\y\j\m\g\4\q\u\d\f\d\h\6\z\x\6\8\z\d\f\s\f\a\q\o\n\r\w\5\j\j\d\s\a\9\5\v\i\8\6\h\i\b\v\j\p\y\y\c\7\9\4\l\j\1\h\7\h\w\l\s\q\1\j\q\c\u\i\s\3\c\4\3\g\q\j\z\f\g\f\6\k\3\u\m\a\h\h\1\a\l\n\w\w\0\7\b\h\f\6\l\s\k\5\9\3\r\u\7\f\s\y\0\1\w\z\n\t\e\u\n\a\6\m\m\7\d\z\i\8\d\f\6\9\g\3\g\1\8\i\j\0\k\j\j\k\9\m\5\u\8\s\i\c\5\9\u\5\7\7\i\k\v\r\l\i\d\z\8\y\p\z\z\k\x\i\b\v\c\v\i\k\2\x\e\n\l\s\w\r\0\f\b\9\e\8\z\8\f\z\e\h\7\a\c\a\8\x\d\v\4\y\5\8\f\f\o\s\r\q\6\c\2\v\b\g\8\6\k\q\t\c\t\e\w\z\k\1\w\1\m\n\x\l\d\x\f\8\h\u\6\8\n\a\6\j\o\l\5\5\d\i\b\2\e\t\a\j\n\e\h\n\y\j\k\b\c\s\d\1\t\u\m\o\1\6\1\1\3\i\3\z\5\p\0\z\u\l\v\t\6\t\t\4\b\2\l\x\m\j\0\g\q\1\6\t\z\4\2\g\u\w\x\i\6\8\9\c\x\f\c\9\7\n\b\h\1\e\6\9\7\d\t\q\r\1\d\n\w\g\c\t\o\i\r\v\z\o\q\a\5\j\f\t\o\t\m\j\p\1\w\0\9\3\u\1\7\x\w\p\3\l\d\a\8\e\w\5\s\0\k\2\a\l\m\j\4\o\j\v\i\q\m\3\h\m\c\2\l\1\b\6\7\s\x\u\e\7\3\2\t\i\q\h\2\9\w\o\3\4\w\c\r\8\l\r\g\t\2\q ]] 00:08:40.435 08:46:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:40.435 08:46:18 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:40.435 [2024-09-28 08:46:18.150065] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:40.435 [2024-09-28 08:46:18.150246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62105 ] 00:08:40.435 [2024-09-28 08:46:18.320082] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.694 [2024-09-28 08:46:18.476034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.694 [2024-09-28 08:46:18.635577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.890  Copying: 512/512 [B] (average 500 kBps) 00:08:41.890 00:08:41.891 08:46:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ d8z5gza66v89kubagg4ltgzwqvanbhk93r1s5bsapjxylgtt1um0fvm3pa8txjz4lb1ddagw5yjmg4qudfdh6zx68zdfsfaqonrw5jjdsa95vi86hibvjpyyc794lj1h7hwlsq1jqcuis3c43gqjzfgf6k3umahh1alnww07bhf6lsk593ru7fsy01wznteuna6mm7dzi8df69g3g18ij0kjjk9m5u8sic59u577ikvrlidz8ypzzkxibvcvik2xenlswr0fb9e8z8fzeh7aca8xdv4y58ffosrq6c2vbg86kqtctewzk1w1mnxldxf8hu68na6jol55dib2etajnehnyjkbcsd1tumo16113i3z5p0zulvt6tt4b2lxmj0gq16tz42guwxi689cxfc97nbh1e697dtqr1dnwgctoirvzoqa5jftotmjp1w093u17xwp3lda8ew5s0k2almj4ojviqm3hmc2l1b67sxue732tiqh29wo34wcr8lrgt2q == \d\8\z\5\g\z\a\6\6\v\8\9\k\u\b\a\g\g\4\l\t\g\z\w\q\v\a\n\b\h\k\9\3\r\1\s\5\b\s\a\p\j\x\y\l\g\t\t\1\u\m\0\f\v\m\3\p\a\8\t\x\j\z\4\l\b\1\d\d\a\g\w\5\y\j\m\g\4\q\u\d\f\d\h\6\z\x\6\8\z\d\f\s\f\a\q\o\n\r\w\5\j\j\d\s\a\9\5\v\i\8\6\h\i\b\v\j\p\y\y\c\7\9\4\l\j\1\h\7\h\w\l\s\q\1\j\q\c\u\i\s\3\c\4\3\g\q\j\z\f\g\f\6\k\3\u\m\a\h\h\1\a\l\n\w\w\0\7\b\h\f\6\l\s\k\5\9\3\r\u\7\f\s\y\0\1\w\z\n\t\e\u\n\a\6\m\m\7\d\z\i\8\d\f\6\9\g\3\g\1\8\i\j\0\k\j\j\k\9\m\5\u\8\s\i\c\5\9\u\5\7\7\i\k\v\r\l\i\d\z\8\y\p\z\z\k\x\i\b\v\c\v\i\k\2\x\e\n\l\s\w\r\0\f\b\9\e\8\z\8\f\z\e\h\7\a\c\a\8\x\d\v\4\y\5\8\f\f\o\s\r\q\6\c\2\v\b\g\8\6\k\q\t\c\t\e\w\z\k\1\w\1\m\n\x\l\d\x\f\8\h\u\6\8\n\a\6\j\o\l\5\5\d\i\b\2\e\t\a\j\n\e\h\n\y\j\k\b\c\s\d\1\t\u\m\o\1\6\1\1\3\i\3\z\5\p\0\z\u\l\v\t\6\t\t\4\b\2\l\x\m\j\0\g\q\1\6\t\z\4\2\g\u\w\x\i\6\8\9\c\x\f\c\9\7\n\b\h\1\e\6\9\7\d\t\q\r\1\d\n\w\g\c\t\o\i\r\v\z\o\q\a\5\j\f\t\o\t\m\j\p\1\w\0\9\3\u\1\7\x\w\p\3\l\d\a\8\e\w\5\s\0\k\2\a\l\m\j\4\o\j\v\i\q\m\3\h\m\c\2\l\1\b\6\7\s\x\u\e\7\3\2\t\i\q\h\2\9\w\o\3\4\w\c\r\8\l\r\g\t\2\q ]] 00:08:41.891 08:46:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:41.891 08:46:19 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:41.891 [2024-09-28 08:46:19.809102] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:08:41.891 [2024-09-28 08:46:19.809242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62127 ] 00:08:42.149 [2024-09-28 08:46:19.972892] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.149 [2024-09-28 08:46:20.138738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.408 [2024-09-28 08:46:20.293155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.786  Copying: 512/512 [B] (average 250 kBps) 00:08:43.786 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ d8z5gza66v89kubagg4ltgzwqvanbhk93r1s5bsapjxylgtt1um0fvm3pa8txjz4lb1ddagw5yjmg4qudfdh6zx68zdfsfaqonrw5jjdsa95vi86hibvjpyyc794lj1h7hwlsq1jqcuis3c43gqjzfgf6k3umahh1alnww07bhf6lsk593ru7fsy01wznteuna6mm7dzi8df69g3g18ij0kjjk9m5u8sic59u577ikvrlidz8ypzzkxibvcvik2xenlswr0fb9e8z8fzeh7aca8xdv4y58ffosrq6c2vbg86kqtctewzk1w1mnxldxf8hu68na6jol55dib2etajnehnyjkbcsd1tumo16113i3z5p0zulvt6tt4b2lxmj0gq16tz42guwxi689cxfc97nbh1e697dtqr1dnwgctoirvzoqa5jftotmjp1w093u17xwp3lda8ew5s0k2almj4ojviqm3hmc2l1b67sxue732tiqh29wo34wcr8lrgt2q == \d\8\z\5\g\z\a\6\6\v\8\9\k\u\b\a\g\g\4\l\t\g\z\w\q\v\a\n\b\h\k\9\3\r\1\s\5\b\s\a\p\j\x\y\l\g\t\t\1\u\m\0\f\v\m\3\p\a\8\t\x\j\z\4\l\b\1\d\d\a\g\w\5\y\j\m\g\4\q\u\d\f\d\h\6\z\x\6\8\z\d\f\s\f\a\q\o\n\r\w\5\j\j\d\s\a\9\5\v\i\8\6\h\i\b\v\j\p\y\y\c\7\9\4\l\j\1\h\7\h\w\l\s\q\1\j\q\c\u\i\s\3\c\4\3\g\q\j\z\f\g\f\6\k\3\u\m\a\h\h\1\a\l\n\w\w\0\7\b\h\f\6\l\s\k\5\9\3\r\u\7\f\s\y\0\1\w\z\n\t\e\u\n\a\6\m\m\7\d\z\i\8\d\f\6\9\g\3\g\1\8\i\j\0\k\j\j\k\9\m\5\u\8\s\i\c\5\9\u\5\7\7\i\k\v\r\l\i\d\z\8\y\p\z\z\k\x\i\b\v\c\v\i\k\2\x\e\n\l\s\w\r\0\f\b\9\e\8\z\8\f\z\e\h\7\a\c\a\8\x\d\v\4\y\5\8\f\f\o\s\r\q\6\c\2\v\b\g\8\6\k\q\t\c\t\e\w\z\k\1\w\1\m\n\x\l\d\x\f\8\h\u\6\8\n\a\6\j\o\l\5\5\d\i\b\2\e\t\a\j\n\e\h\n\y\j\k\b\c\s\d\1\t\u\m\o\1\6\1\1\3\i\3\z\5\p\0\z\u\l\v\t\6\t\t\4\b\2\l\x\m\j\0\g\q\1\6\t\z\4\2\g\u\w\x\i\6\8\9\c\x\f\c\9\7\n\b\h\1\e\6\9\7\d\t\q\r\1\d\n\w\g\c\t\o\i\r\v\z\o\q\a\5\j\f\t\o\t\m\j\p\1\w\0\9\3\u\1\7\x\w\p\3\l\d\a\8\e\w\5\s\0\k\2\a\l\m\j\4\o\j\v\i\q\m\3\h\m\c\2\l\1\b\6\7\s\x\u\e\7\3\2\t\i\q\h\2\9\w\o\3\4\w\c\r\8\l\r\g\t\2\q ]] 00:08:43.786 00:08:43.786 real 0m13.464s 00:08:43.786 user 0m11.001s 00:08:43.786 sys 0m6.598s 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:43.786 ************************************ 00:08:43.786 END TEST dd_flags_misc 00:08:43.786 ************************************ 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:43.786 * Second test run, disabling liburing, forcing AIO 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set 
+x 00:08:43.786 ************************************ 00:08:43.786 START TEST dd_flag_append_forced_aio 00:08:43.786 ************************************ 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=qz2yti9t4yy4gjy0fgtlvtq2hs50j7d5 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=tiprgk7g8o00oa1u1g71ukjvjsfms9nf 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s qz2yti9t4yy4gjy0fgtlvtq2hs50j7d5 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s tiprgk7g8o00oa1u1g71ukjvjsfms9nf 00:08:43.786 08:46:21 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:43.786 [2024-09-28 08:46:21.521805] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
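dd_flag_append_forced_aio seeds two 32-byte strings (dump0=qz2yti... and dump1=tiprgk... above), writes one into each dump file with printf %s, then copies dump0 onto dump1 with --aio and --oflag=append so the new bytes must land after the existing ones. A reduced version of that flow, with placeholder strings standing in for the suite's gen_bytes 32 output:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # path as printed in this log
    dump0=0123456789abcdefghijklmnopqrstuv    # placeholders; the suite generates these randomly
    dump1=vutsrqponmlkjihgfedcba9876543210
    printf %s "$dump0" > ./dd.dump0
    printf %s "$dump1" > ./dd.dump1
    "$SPDK_DD" --aio --if=./dd.dump0 --of=./dd.dump1 --oflag=append
    # append keeps dd.dump1's original bytes first, then adds dd.dump0's
    [[ $(< ./dd.dump1) == "${dump1}${dump0}" ]]

That concatenation order is exactly what the [[ tiprgk...qz2yti... == ... ]] check further down asserts.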
00:08:43.786 [2024-09-28 08:46:21.521986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62167 ] 00:08:43.786 [2024-09-28 08:46:21.679964] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.046 [2024-09-28 08:46:21.829690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.046 [2024-09-28 08:46:21.981614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.243  Copying: 32/32 [B] (average 31 kBps) 00:08:45.243 00:08:45.243 08:46:22 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ tiprgk7g8o00oa1u1g71ukjvjsfms9nfqz2yti9t4yy4gjy0fgtlvtq2hs50j7d5 == \t\i\p\r\g\k\7\g\8\o\0\0\o\a\1\u\1\g\7\1\u\k\j\v\j\s\f\m\s\9\n\f\q\z\2\y\t\i\9\t\4\y\y\4\g\j\y\0\f\g\t\l\v\t\q\2\h\s\5\0\j\7\d\5 ]] 00:08:45.243 00:08:45.243 real 0m1.569s 00:08:45.243 user 0m1.279s 00:08:45.243 sys 0m0.170s 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.243 ************************************ 00:08:45.243 END TEST dd_flag_append_forced_aio 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:45.243 ************************************ 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:45.243 ************************************ 00:08:45.243 START TEST dd_flag_directory_forced_aio 00:08:45.243 ************************************ 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:45.243 08:46:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:45.243 [2024-09-28 08:46:23.165777] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:45.243 [2024-09-28 08:46:23.165970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62211 ] 00:08:45.503 [2024-09-28 08:46:23.335762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.503 [2024-09-28 08:46:23.497193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.762 [2024-09-28 08:46:23.668393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.762 [2024-09-28 08:46:23.749514] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:45.762 [2024-09-28 08:46:23.749587] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:45.762 [2024-09-28 08:46:23.749607] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:46.696 [2024-09-28 08:46:24.325395] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.955 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:46.956 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:46.956 08:46:24 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:46.956 [2024-09-28 08:46:24.816601] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:46.956 [2024-09-28 08:46:24.816776] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62233 ] 00:08:47.215 [2024-09-28 08:46:24.987222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.215 [2024-09-28 08:46:25.150256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.473 [2024-09-28 08:46:25.306233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.473 [2024-09-28 08:46:25.385439] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:47.473 [2024-09-28 08:46:25.385513] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:47.473 [2024-09-28 08:46:25.385535] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:48.046 [2024-09-28 08:46:25.969062] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:48.306 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:08:48.306 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:48.306 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:08:48.306 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:48.306 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:48.306 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
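Both halves of dd_flag_directory_forced_aio are negative tests: dd.dump0 is a regular file, so opening it with --iflag=directory or --oflag=directory has to fail with "Not a directory", and the NOT wrapper from autotest_common.sh converts that expected failure into a pass (the es=236 -> es=108 -> es=1 lines above are its exit-status bookkeeping after spdk_app_stop). A bare-bones stand-in for that wrapper and the two checks, assuming the same binary (the real helper also validates the command and maps the status through the case/es table shown above):

    NOT() {                      # minimal stand-in for the autotest_common.sh helper
      if "$@"; then return 1; else return 0; fi
    }
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    : > ./dd.dump0               # a regular file, not a directory
    NOT "$SPDK_DD" --aio --if=./dd.dump0 --iflag=directory --of=./dd.dump0
    NOT "$SPDK_DD" --aio --if=./dd.dump0 --of=./dd.dump0 --oflag=directory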
00:08:48.306 00:08:48.306 real 0m3.246s 00:08:48.306 user 0m2.649s 00:08:48.306 sys 0m0.377s 00:08:48.306 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.306 ************************************ 00:08:48.306 END TEST dd_flag_directory_forced_aio 00:08:48.306 ************************************ 00:08:48.306 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:48.564 08:46:26 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:48.564 08:46:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:48.564 08:46:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.564 08:46:26 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:48.564 ************************************ 00:08:48.564 START TEST dd_flag_nofollow_forced_aio 00:08:48.565 ************************************ 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:48.565 08:46:26 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.565 08:46:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:48.565 [2024-09-28 08:46:26.450036] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:48.565 [2024-09-28 08:46:26.450172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62273 ] 00:08:48.824 [2024-09-28 08:46:26.607610] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.824 [2024-09-28 08:46:26.770136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.083 [2024-09-28 08:46:26.917043] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.083 [2024-09-28 08:46:26.997322] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:49.083 [2024-09-28 08:46:26.997398] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:49.083 [2024-09-28 08:46:26.997419] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:49.665 [2024-09-28 08:46:27.598832] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:50.270 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:08:50.270 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:50.270 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:08:50.270 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:50.270 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:50.270 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:50.271 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:50.271 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:50.271 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:50.271 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.271 08:46:27 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.271 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.271 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.271 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.271 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.271 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.271 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:50.271 08:46:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:50.271 [2024-09-28 08:46:28.081653] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:50.271 [2024-09-28 08:46:28.081861] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62295 ] 00:08:50.271 [2024-09-28 08:46:28.252204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.531 [2024-09-28 08:46:28.424730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.791 [2024-09-28 08:46:28.581184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.791 [2024-09-28 08:46:28.660703] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:50.791 [2024-09-28 08:46:28.661087] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:50.791 [2024-09-28 08:46:28.661119] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:51.359 [2024-09-28 08:46:29.258959] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:51.618 08:46:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:08:51.618 08:46:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:51.618 08:46:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:08:51.618 08:46:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:51.618 08:46:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:51.618 08:46:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:51.618 08:46:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:08:51.618 08:46:29 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:51.618 08:46:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:51.877 08:46:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:51.877 [2024-09-28 08:46:29.692438] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:51.878 [2024-09-28 08:46:29.692796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62314 ] 00:08:51.878 [2024-09-28 08:46:29.847103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.136 [2024-09-28 08:46:30.001843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.396 [2024-09-28 08:46:30.156020] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:53.333  Copying: 512/512 [B] (average 500 kBps) 00:08:53.333 00:08:53.333 ************************************ 00:08:53.333 END TEST dd_flag_nofollow_forced_aio 00:08:53.333 ************************************ 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ svnojn1p4vxt5ukj7mwommfjdegfkvac6ey698zg61xffs6i5akp2ri6qm9xtpdc6kd1dn3qnt6q8y49lzqxquk37sr15fwgbz2mu03mzv1ok92uwgz941m9uarlcx4uxgbnujb73fgfi1uczzliv2d4se2vehl1ugz0kmbaq9xoirw060b2illuhdxpzdwoiynlm7nzlylwjzxt3p07opetc8zx39k3cor2w4rltjs5oezwno5gnh43fru15fk6yf3nvb6pguwb91ho5litm7zb1lt3dh6ex1ttehz49obrx1bq8610r1gmzhr2di6qd77151kxgn6di4tu47y2kru1m8s83mga4pb5u9u59mttc6gzkeotfdy1mgpkzjnls7k5xdhe8mswrf8tcoyxogk2jn1e63om9ezs7yhzh6izzya7z4virx75b6mly3abif89iurvhigx0mq8ec9new8rsax825fazc6tcsoknpk7usepcgeo5ejd7e75d769 == \s\v\n\o\j\n\1\p\4\v\x\t\5\u\k\j\7\m\w\o\m\m\f\j\d\e\g\f\k\v\a\c\6\e\y\6\9\8\z\g\6\1\x\f\f\s\6\i\5\a\k\p\2\r\i\6\q\m\9\x\t\p\d\c\6\k\d\1\d\n\3\q\n\t\6\q\8\y\4\9\l\z\q\x\q\u\k\3\7\s\r\1\5\f\w\g\b\z\2\m\u\0\3\m\z\v\1\o\k\9\2\u\w\g\z\9\4\1\m\9\u\a\r\l\c\x\4\u\x\g\b\n\u\j\b\7\3\f\g\f\i\1\u\c\z\z\l\i\v\2\d\4\s\e\2\v\e\h\l\1\u\g\z\0\k\m\b\a\q\9\x\o\i\r\w\0\6\0\b\2\i\l\l\u\h\d\x\p\z\d\w\o\i\y\n\l\m\7\n\z\l\y\l\w\j\z\x\t\3\p\0\7\o\p\e\t\c\8\z\x\3\9\k\3\c\o\r\2\w\4\r\l\t\j\s\5\o\e\z\w\n\o\5\g\n\h\4\3\f\r\u\1\5\f\k\6\y\f\3\n\v\b\6\p\g\u\w\b\9\1\h\o\5\l\i\t\m\7\z\b\1\l\t\3\d\h\6\e\x\1\t\t\e\h\z\4\9\o\b\r\x\1\b\q\8\6\1\0\r\1\g\m\z\h\r\2\d\i\6\q\d\7\7\1\5\1\k\x\g\n\6\d\i\4\t\u\4\7\y\2\k\r\u\1\m\8\s\8\3\m\g\a\4\p\b\5\u\9\u\5\9\m\t\t\c\6\g\z\k\e\o\t\f\d\y\1\m\g\p\k\z\j\n\l\s\7\k\5\x\d\h\e\8\m\s\w\r\f\8\t\c\o\y\x\o\g\k\2\j\n\1\e\6\3\o\m\9\e\z\s\7\y\h\z\h\6\i\z\z\y\a\7\z\4\v\i\r\x\7\5\b\6\m\l\y\3\a\b\i\f\8\9\i\u\r\v\h\i\g\x\0\m\q\8\e\c\9\n\e\w\8\r\s\a\x\8\2\5\f\a\z\c\6\t\c\s\o\k\n\p\k\7\u\s\e\p\c\g\e\o\5\e\j\d\7\e\7\5\d\7\6\9 ]] 00:08:53.333 00:08:53.333 real 0m4.871s 00:08:53.333 user 0m3.984s 00:08:53.333 sys 0m0.544s 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:53.333 
08:46:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:53.333 ************************************ 00:08:53.333 START TEST dd_flag_noatime_forced_aio 00:08:53.333 ************************************ 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1727513190 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1727513191 00:08:53.333 08:46:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:54.711 08:46:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:54.711 [2024-09-28 08:46:32.420284] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
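The noatime check brackets the copy with stat --printf=%X: the suite records dd.dump0's access time (atime_if=1727513190 above), sleeps a second so any ordinary read would be visible, and then copies through --aio --iflag=noatime; the later (( atime_if == 1727513190 )) line insists the source atime did not move. Roughly, using the same stat/sleep pattern (paths illustrative):

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    head -c 512 /dev/urandom > ./dd.dump0        # stand-in for the suite's gen_bytes 512
    atime_before=$(stat --printf=%X ./dd.dump0)  # same %X query the test uses
    sleep 1
    "$SPDK_DD" --aio --if=./dd.dump0 --iflag=noatime --of=./dd.dump1
    atime_after=$(stat --printf=%X ./dd.dump0)
    (( atime_after == atime_before ))            # a noatime read must not touch the atime

The follow-up copy in the log drops --iflag=noatime, after which the suite only expects the recorded atime to be older than the post-copy clock (the (( atime_if < 1727513194 )) comparison).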
00:08:54.711 [2024-09-28 08:46:32.420466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62367 ] 00:08:54.711 [2024-09-28 08:46:32.595654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.969 [2024-09-28 08:46:32.788014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.969 [2024-09-28 08:46:32.940737] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:56.163  Copying: 512/512 [B] (average 500 kBps) 00:08:56.163 00:08:56.163 08:46:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:56.163 08:46:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1727513190 )) 00:08:56.163 08:46:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:56.163 08:46:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1727513191 )) 00:08:56.163 08:46:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:56.163 [2024-09-28 08:46:34.100402] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:56.163 [2024-09-28 08:46:34.100580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62396 ] 00:08:56.421 [2024-09-28 08:46:34.272171] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.680 [2024-09-28 08:46:34.425269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.680 [2024-09-28 08:46:34.569322] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:57.614  Copying: 512/512 [B] (average 500 kBps) 00:08:57.614 00:08:57.614 08:46:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:57.614 08:46:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1727513194 )) 00:08:57.615 00:08:57.615 real 0m4.316s 00:08:57.615 user 0m2.657s 00:08:57.615 sys 0m0.415s 00:08:57.615 ************************************ 00:08:57.615 END TEST dd_flag_noatime_forced_aio 00:08:57.615 ************************************ 00:08:57.615 08:46:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.615 08:46:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:57.873 08:46:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:57.873 08:46:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.873 08:46:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.873 08:46:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:57.873 
************************************ 00:08:57.873 START TEST dd_flags_misc_forced_aio 00:08:57.873 ************************************ 00:08:57.873 08:46:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:08:57.873 08:46:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:57.873 08:46:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:57.873 08:46:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:57.873 08:46:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:57.873 08:46:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:57.874 08:46:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:57.874 08:46:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:57.874 08:46:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:57.874 08:46:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:57.874 [2024-09-28 08:46:35.751044] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:57.874 [2024-09-28 08:46:35.751185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62429 ] 00:08:58.131 [2024-09-28 08:46:35.904689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.131 [2024-09-28 08:46:36.053140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.390 [2024-09-28 08:46:36.196733] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.326  Copying: 512/512 [B] (average 500 kBps) 00:08:59.326 00:08:59.326 08:46:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yzg17lrwl4q8a5go2v3qqytu1jay3zhk97pnc7js3sr5sdtnf0np45kgq3tzo685k6numwv3ii30pjcdsu71yecfaftwm3vx5cdidqe1se6nql738j76iabn9ouiag60e9q0xb5kg221pvbf3b5rsdvh5adj9obzi12lcf7r76tzqeplq8dm2g724x1hamagmtxcymz42nqv3y7xlhjrjw830ftiyowaj3youuq9wloq79quh4ohfbze6lswx2h53g8csw0ag9w482b9tfm94xc3cgu9f2xjeznyh07futzhbb8xx1p95s4fkmkuj49h61w6ai5zr5ik3rbn1e6pd6vnyj4pfvvtobl3z6um2w0ngvvlke3unnve0otrafkw7g1chxp231c8pfm4b1n5d6ffr2k5g0mft4f35jyxjlty2klshh1dq4735wivyg7i5im82p8tkxre4cshtz2095fdma08ueewryfhcgrh474hljke1jo9367ypnuoja2j == 
\y\z\g\1\7\l\r\w\l\4\q\8\a\5\g\o\2\v\3\q\q\y\t\u\1\j\a\y\3\z\h\k\9\7\p\n\c\7\j\s\3\s\r\5\s\d\t\n\f\0\n\p\4\5\k\g\q\3\t\z\o\6\8\5\k\6\n\u\m\w\v\3\i\i\3\0\p\j\c\d\s\u\7\1\y\e\c\f\a\f\t\w\m\3\v\x\5\c\d\i\d\q\e\1\s\e\6\n\q\l\7\3\8\j\7\6\i\a\b\n\9\o\u\i\a\g\6\0\e\9\q\0\x\b\5\k\g\2\2\1\p\v\b\f\3\b\5\r\s\d\v\h\5\a\d\j\9\o\b\z\i\1\2\l\c\f\7\r\7\6\t\z\q\e\p\l\q\8\d\m\2\g\7\2\4\x\1\h\a\m\a\g\m\t\x\c\y\m\z\4\2\n\q\v\3\y\7\x\l\h\j\r\j\w\8\3\0\f\t\i\y\o\w\a\j\3\y\o\u\u\q\9\w\l\o\q\7\9\q\u\h\4\o\h\f\b\z\e\6\l\s\w\x\2\h\5\3\g\8\c\s\w\0\a\g\9\w\4\8\2\b\9\t\f\m\9\4\x\c\3\c\g\u\9\f\2\x\j\e\z\n\y\h\0\7\f\u\t\z\h\b\b\8\x\x\1\p\9\5\s\4\f\k\m\k\u\j\4\9\h\6\1\w\6\a\i\5\z\r\5\i\k\3\r\b\n\1\e\6\p\d\6\v\n\y\j\4\p\f\v\v\t\o\b\l\3\z\6\u\m\2\w\0\n\g\v\v\l\k\e\3\u\n\n\v\e\0\o\t\r\a\f\k\w\7\g\1\c\h\x\p\2\3\1\c\8\p\f\m\4\b\1\n\5\d\6\f\f\r\2\k\5\g\0\m\f\t\4\f\3\5\j\y\x\j\l\t\y\2\k\l\s\h\h\1\d\q\4\7\3\5\w\i\v\y\g\7\i\5\i\m\8\2\p\8\t\k\x\r\e\4\c\s\h\t\z\2\0\9\5\f\d\m\a\0\8\u\e\e\w\r\y\f\h\c\g\r\h\4\7\4\h\l\j\k\e\1\j\o\9\3\6\7\y\p\n\u\o\j\a\2\j ]] 00:08:59.326 08:46:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:59.326 08:46:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:59.584 [2024-09-28 08:46:37.342790] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:59.584 [2024-09-28 08:46:37.342990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62454 ] 00:08:59.584 [2024-09-28 08:46:37.496983] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.843 [2024-09-28 08:46:37.650408] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.843 [2024-09-28 08:46:37.804418] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.039  Copying: 512/512 [B] (average 500 kBps) 00:09:01.039 00:09:01.039 08:46:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yzg17lrwl4q8a5go2v3qqytu1jay3zhk97pnc7js3sr5sdtnf0np45kgq3tzo685k6numwv3ii30pjcdsu71yecfaftwm3vx5cdidqe1se6nql738j76iabn9ouiag60e9q0xb5kg221pvbf3b5rsdvh5adj9obzi12lcf7r76tzqeplq8dm2g724x1hamagmtxcymz42nqv3y7xlhjrjw830ftiyowaj3youuq9wloq79quh4ohfbze6lswx2h53g8csw0ag9w482b9tfm94xc3cgu9f2xjeznyh07futzhbb8xx1p95s4fkmkuj49h61w6ai5zr5ik3rbn1e6pd6vnyj4pfvvtobl3z6um2w0ngvvlke3unnve0otrafkw7g1chxp231c8pfm4b1n5d6ffr2k5g0mft4f35jyxjlty2klshh1dq4735wivyg7i5im82p8tkxre4cshtz2095fdma08ueewryfhcgrh474hljke1jo9367ypnuoja2j == 
\y\z\g\1\7\l\r\w\l\4\q\8\a\5\g\o\2\v\3\q\q\y\t\u\1\j\a\y\3\z\h\k\9\7\p\n\c\7\j\s\3\s\r\5\s\d\t\n\f\0\n\p\4\5\k\g\q\3\t\z\o\6\8\5\k\6\n\u\m\w\v\3\i\i\3\0\p\j\c\d\s\u\7\1\y\e\c\f\a\f\t\w\m\3\v\x\5\c\d\i\d\q\e\1\s\e\6\n\q\l\7\3\8\j\7\6\i\a\b\n\9\o\u\i\a\g\6\0\e\9\q\0\x\b\5\k\g\2\2\1\p\v\b\f\3\b\5\r\s\d\v\h\5\a\d\j\9\o\b\z\i\1\2\l\c\f\7\r\7\6\t\z\q\e\p\l\q\8\d\m\2\g\7\2\4\x\1\h\a\m\a\g\m\t\x\c\y\m\z\4\2\n\q\v\3\y\7\x\l\h\j\r\j\w\8\3\0\f\t\i\y\o\w\a\j\3\y\o\u\u\q\9\w\l\o\q\7\9\q\u\h\4\o\h\f\b\z\e\6\l\s\w\x\2\h\5\3\g\8\c\s\w\0\a\g\9\w\4\8\2\b\9\t\f\m\9\4\x\c\3\c\g\u\9\f\2\x\j\e\z\n\y\h\0\7\f\u\t\z\h\b\b\8\x\x\1\p\9\5\s\4\f\k\m\k\u\j\4\9\h\6\1\w\6\a\i\5\z\r\5\i\k\3\r\b\n\1\e\6\p\d\6\v\n\y\j\4\p\f\v\v\t\o\b\l\3\z\6\u\m\2\w\0\n\g\v\v\l\k\e\3\u\n\n\v\e\0\o\t\r\a\f\k\w\7\g\1\c\h\x\p\2\3\1\c\8\p\f\m\4\b\1\n\5\d\6\f\f\r\2\k\5\g\0\m\f\t\4\f\3\5\j\y\x\j\l\t\y\2\k\l\s\h\h\1\d\q\4\7\3\5\w\i\v\y\g\7\i\5\i\m\8\2\p\8\t\k\x\r\e\4\c\s\h\t\z\2\0\9\5\f\d\m\a\0\8\u\e\e\w\r\y\f\h\c\g\r\h\4\7\4\h\l\j\k\e\1\j\o\9\3\6\7\y\p\n\u\o\j\a\2\j ]] 00:09:01.039 08:46:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:01.039 08:46:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:01.039 [2024-09-28 08:46:38.951982] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:01.039 [2024-09-28 08:46:38.952194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62468 ] 00:09:01.300 [2024-09-28 08:46:39.114401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.300 [2024-09-28 08:46:39.272942] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.559 [2024-09-28 08:46:39.418899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.496  Copying: 512/512 [B] (average 250 kBps) 00:09:02.496 00:09:02.756 08:46:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yzg17lrwl4q8a5go2v3qqytu1jay3zhk97pnc7js3sr5sdtnf0np45kgq3tzo685k6numwv3ii30pjcdsu71yecfaftwm3vx5cdidqe1se6nql738j76iabn9ouiag60e9q0xb5kg221pvbf3b5rsdvh5adj9obzi12lcf7r76tzqeplq8dm2g724x1hamagmtxcymz42nqv3y7xlhjrjw830ftiyowaj3youuq9wloq79quh4ohfbze6lswx2h53g8csw0ag9w482b9tfm94xc3cgu9f2xjeznyh07futzhbb8xx1p95s4fkmkuj49h61w6ai5zr5ik3rbn1e6pd6vnyj4pfvvtobl3z6um2w0ngvvlke3unnve0otrafkw7g1chxp231c8pfm4b1n5d6ffr2k5g0mft4f35jyxjlty2klshh1dq4735wivyg7i5im82p8tkxre4cshtz2095fdma08ueewryfhcgrh474hljke1jo9367ypnuoja2j == 
\y\z\g\1\7\l\r\w\l\4\q\8\a\5\g\o\2\v\3\q\q\y\t\u\1\j\a\y\3\z\h\k\9\7\p\n\c\7\j\s\3\s\r\5\s\d\t\n\f\0\n\p\4\5\k\g\q\3\t\z\o\6\8\5\k\6\n\u\m\w\v\3\i\i\3\0\p\j\c\d\s\u\7\1\y\e\c\f\a\f\t\w\m\3\v\x\5\c\d\i\d\q\e\1\s\e\6\n\q\l\7\3\8\j\7\6\i\a\b\n\9\o\u\i\a\g\6\0\e\9\q\0\x\b\5\k\g\2\2\1\p\v\b\f\3\b\5\r\s\d\v\h\5\a\d\j\9\o\b\z\i\1\2\l\c\f\7\r\7\6\t\z\q\e\p\l\q\8\d\m\2\g\7\2\4\x\1\h\a\m\a\g\m\t\x\c\y\m\z\4\2\n\q\v\3\y\7\x\l\h\j\r\j\w\8\3\0\f\t\i\y\o\w\a\j\3\y\o\u\u\q\9\w\l\o\q\7\9\q\u\h\4\o\h\f\b\z\e\6\l\s\w\x\2\h\5\3\g\8\c\s\w\0\a\g\9\w\4\8\2\b\9\t\f\m\9\4\x\c\3\c\g\u\9\f\2\x\j\e\z\n\y\h\0\7\f\u\t\z\h\b\b\8\x\x\1\p\9\5\s\4\f\k\m\k\u\j\4\9\h\6\1\w\6\a\i\5\z\r\5\i\k\3\r\b\n\1\e\6\p\d\6\v\n\y\j\4\p\f\v\v\t\o\b\l\3\z\6\u\m\2\w\0\n\g\v\v\l\k\e\3\u\n\n\v\e\0\o\t\r\a\f\k\w\7\g\1\c\h\x\p\2\3\1\c\8\p\f\m\4\b\1\n\5\d\6\f\f\r\2\k\5\g\0\m\f\t\4\f\3\5\j\y\x\j\l\t\y\2\k\l\s\h\h\1\d\q\4\7\3\5\w\i\v\y\g\7\i\5\i\m\8\2\p\8\t\k\x\r\e\4\c\s\h\t\z\2\0\9\5\f\d\m\a\0\8\u\e\e\w\r\y\f\h\c\g\r\h\4\7\4\h\l\j\k\e\1\j\o\9\3\6\7\y\p\n\u\o\j\a\2\j ]] 00:09:02.756 08:46:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:02.756 08:46:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:02.756 [2024-09-28 08:46:40.576552] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:02.756 [2024-09-28 08:46:40.576691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62493 ] 00:09:02.756 [2024-09-28 08:46:40.735257] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.016 [2024-09-28 08:46:40.893790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.275 [2024-09-28 08:46:41.046576] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:04.213  Copying: 512/512 [B] (average 250 kBps) 00:09:04.214 00:09:04.214 08:46:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ yzg17lrwl4q8a5go2v3qqytu1jay3zhk97pnc7js3sr5sdtnf0np45kgq3tzo685k6numwv3ii30pjcdsu71yecfaftwm3vx5cdidqe1se6nql738j76iabn9ouiag60e9q0xb5kg221pvbf3b5rsdvh5adj9obzi12lcf7r76tzqeplq8dm2g724x1hamagmtxcymz42nqv3y7xlhjrjw830ftiyowaj3youuq9wloq79quh4ohfbze6lswx2h53g8csw0ag9w482b9tfm94xc3cgu9f2xjeznyh07futzhbb8xx1p95s4fkmkuj49h61w6ai5zr5ik3rbn1e6pd6vnyj4pfvvtobl3z6um2w0ngvvlke3unnve0otrafkw7g1chxp231c8pfm4b1n5d6ffr2k5g0mft4f35jyxjlty2klshh1dq4735wivyg7i5im82p8tkxre4cshtz2095fdma08ueewryfhcgrh474hljke1jo9367ypnuoja2j == 
\y\z\g\1\7\l\r\w\l\4\q\8\a\5\g\o\2\v\3\q\q\y\t\u\1\j\a\y\3\z\h\k\9\7\p\n\c\7\j\s\3\s\r\5\s\d\t\n\f\0\n\p\4\5\k\g\q\3\t\z\o\6\8\5\k\6\n\u\m\w\v\3\i\i\3\0\p\j\c\d\s\u\7\1\y\e\c\f\a\f\t\w\m\3\v\x\5\c\d\i\d\q\e\1\s\e\6\n\q\l\7\3\8\j\7\6\i\a\b\n\9\o\u\i\a\g\6\0\e\9\q\0\x\b\5\k\g\2\2\1\p\v\b\f\3\b\5\r\s\d\v\h\5\a\d\j\9\o\b\z\i\1\2\l\c\f\7\r\7\6\t\z\q\e\p\l\q\8\d\m\2\g\7\2\4\x\1\h\a\m\a\g\m\t\x\c\y\m\z\4\2\n\q\v\3\y\7\x\l\h\j\r\j\w\8\3\0\f\t\i\y\o\w\a\j\3\y\o\u\u\q\9\w\l\o\q\7\9\q\u\h\4\o\h\f\b\z\e\6\l\s\w\x\2\h\5\3\g\8\c\s\w\0\a\g\9\w\4\8\2\b\9\t\f\m\9\4\x\c\3\c\g\u\9\f\2\x\j\e\z\n\y\h\0\7\f\u\t\z\h\b\b\8\x\x\1\p\9\5\s\4\f\k\m\k\u\j\4\9\h\6\1\w\6\a\i\5\z\r\5\i\k\3\r\b\n\1\e\6\p\d\6\v\n\y\j\4\p\f\v\v\t\o\b\l\3\z\6\u\m\2\w\0\n\g\v\v\l\k\e\3\u\n\n\v\e\0\o\t\r\a\f\k\w\7\g\1\c\h\x\p\2\3\1\c\8\p\f\m\4\b\1\n\5\d\6\f\f\r\2\k\5\g\0\m\f\t\4\f\3\5\j\y\x\j\l\t\y\2\k\l\s\h\h\1\d\q\4\7\3\5\w\i\v\y\g\7\i\5\i\m\8\2\p\8\t\k\x\r\e\4\c\s\h\t\z\2\0\9\5\f\d\m\a\0\8\u\e\e\w\r\y\f\h\c\g\r\h\4\7\4\h\l\j\k\e\1\j\o\9\3\6\7\y\p\n\u\o\j\a\2\j ]] 00:09:04.214 08:46:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:09:04.214 08:46:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:09:04.214 08:46:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:09:04.214 08:46:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:04.214 08:46:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:04.214 08:46:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:09:04.214 [2024-09-28 08:46:42.173017] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:04.214 [2024-09-28 08:46:42.173148] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62513 ] 00:09:04.473 [2024-09-28 08:46:42.330774] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.733 [2024-09-28 08:46:42.482249] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.733 [2024-09-28 08:46:42.629455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.670  Copying: 512/512 [B] (average 500 kBps) 00:09:05.670 00:09:05.929 08:46:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qyoxrgpefmsly78bsi3nsa3y2mec2cudk28x87yqz90b6v2fktaex4xsay7dpgoau22xmng68yzmtxnhymn6k4y0asez1jlexw59avewc90601fxj1chhy13nl720wbmldsi2wg2ha9aal2zksbdb26tqobkq0qe2drxiwtcartswt7j7584823nj7ctope4falmx2a5j6v1d42krt1nt86lxlxl2f0a5devkrnqa1vga1ojhrhplwk0hn7crpkmyf9erd1epwfmjhhncjxant8vqfs01urlvsdrmlx5unzgrsctsrv1ofmkylxc2q1k836qhtzrz9gkyk8xxib6m7xysaeak4ra9grv166w8dgbiirdw8v9ex7p4rdt548yz5wu1su036iqe7q2nlwul54gvbc8yaygqowgaysh5aw46uw7t82vs1fltlg6wztvtjln1wl6dxl029oicxhg72ughel6vqyp7juip3ck0mzu0qug484ygmxk3a5tgvbk == \q\y\o\x\r\g\p\e\f\m\s\l\y\7\8\b\s\i\3\n\s\a\3\y\2\m\e\c\2\c\u\d\k\2\8\x\8\7\y\q\z\9\0\b\6\v\2\f\k\t\a\e\x\4\x\s\a\y\7\d\p\g\o\a\u\2\2\x\m\n\g\6\8\y\z\m\t\x\n\h\y\m\n\6\k\4\y\0\a\s\e\z\1\j\l\e\x\w\5\9\a\v\e\w\c\9\0\6\0\1\f\x\j\1\c\h\h\y\1\3\n\l\7\2\0\w\b\m\l\d\s\i\2\w\g\2\h\a\9\a\a\l\2\z\k\s\b\d\b\2\6\t\q\o\b\k\q\0\q\e\2\d\r\x\i\w\t\c\a\r\t\s\w\t\7\j\7\5\8\4\8\2\3\n\j\7\c\t\o\p\e\4\f\a\l\m\x\2\a\5\j\6\v\1\d\4\2\k\r\t\1\n\t\8\6\l\x\l\x\l\2\f\0\a\5\d\e\v\k\r\n\q\a\1\v\g\a\1\o\j\h\r\h\p\l\w\k\0\h\n\7\c\r\p\k\m\y\f\9\e\r\d\1\e\p\w\f\m\j\h\h\n\c\j\x\a\n\t\8\v\q\f\s\0\1\u\r\l\v\s\d\r\m\l\x\5\u\n\z\g\r\s\c\t\s\r\v\1\o\f\m\k\y\l\x\c\2\q\1\k\8\3\6\q\h\t\z\r\z\9\g\k\y\k\8\x\x\i\b\6\m\7\x\y\s\a\e\a\k\4\r\a\9\g\r\v\1\6\6\w\8\d\g\b\i\i\r\d\w\8\v\9\e\x\7\p\4\r\d\t\5\4\8\y\z\5\w\u\1\s\u\0\3\6\i\q\e\7\q\2\n\l\w\u\l\5\4\g\v\b\c\8\y\a\y\g\q\o\w\g\a\y\s\h\5\a\w\4\6\u\w\7\t\8\2\v\s\1\f\l\t\l\g\6\w\z\t\v\t\j\l\n\1\w\l\6\d\x\l\0\2\9\o\i\c\x\h\g\7\2\u\g\h\e\l\6\v\q\y\p\7\j\u\i\p\3\c\k\0\m\z\u\0\q\u\g\4\8\4\y\g\m\x\k\3\a\5\t\g\v\b\k ]] 00:09:05.929 08:46:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:05.929 08:46:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:09:05.929 [2024-09-28 08:46:43.749168] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:05.929 [2024-09-28 08:46:43.749354] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62532 ] 00:09:05.929 [2024-09-28 08:46:43.903402] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.188 [2024-09-28 08:46:44.054412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.447 [2024-09-28 08:46:44.208223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.384  Copying: 512/512 [B] (average 500 kBps) 00:09:07.384 00:09:07.384 08:46:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qyoxrgpefmsly78bsi3nsa3y2mec2cudk28x87yqz90b6v2fktaex4xsay7dpgoau22xmng68yzmtxnhymn6k4y0asez1jlexw59avewc90601fxj1chhy13nl720wbmldsi2wg2ha9aal2zksbdb26tqobkq0qe2drxiwtcartswt7j7584823nj7ctope4falmx2a5j6v1d42krt1nt86lxlxl2f0a5devkrnqa1vga1ojhrhplwk0hn7crpkmyf9erd1epwfmjhhncjxant8vqfs01urlvsdrmlx5unzgrsctsrv1ofmkylxc2q1k836qhtzrz9gkyk8xxib6m7xysaeak4ra9grv166w8dgbiirdw8v9ex7p4rdt548yz5wu1su036iqe7q2nlwul54gvbc8yaygqowgaysh5aw46uw7t82vs1fltlg6wztvtjln1wl6dxl029oicxhg72ughel6vqyp7juip3ck0mzu0qug484ygmxk3a5tgvbk == \q\y\o\x\r\g\p\e\f\m\s\l\y\7\8\b\s\i\3\n\s\a\3\y\2\m\e\c\2\c\u\d\k\2\8\x\8\7\y\q\z\9\0\b\6\v\2\f\k\t\a\e\x\4\x\s\a\y\7\d\p\g\o\a\u\2\2\x\m\n\g\6\8\y\z\m\t\x\n\h\y\m\n\6\k\4\y\0\a\s\e\z\1\j\l\e\x\w\5\9\a\v\e\w\c\9\0\6\0\1\f\x\j\1\c\h\h\y\1\3\n\l\7\2\0\w\b\m\l\d\s\i\2\w\g\2\h\a\9\a\a\l\2\z\k\s\b\d\b\2\6\t\q\o\b\k\q\0\q\e\2\d\r\x\i\w\t\c\a\r\t\s\w\t\7\j\7\5\8\4\8\2\3\n\j\7\c\t\o\p\e\4\f\a\l\m\x\2\a\5\j\6\v\1\d\4\2\k\r\t\1\n\t\8\6\l\x\l\x\l\2\f\0\a\5\d\e\v\k\r\n\q\a\1\v\g\a\1\o\j\h\r\h\p\l\w\k\0\h\n\7\c\r\p\k\m\y\f\9\e\r\d\1\e\p\w\f\m\j\h\h\n\c\j\x\a\n\t\8\v\q\f\s\0\1\u\r\l\v\s\d\r\m\l\x\5\u\n\z\g\r\s\c\t\s\r\v\1\o\f\m\k\y\l\x\c\2\q\1\k\8\3\6\q\h\t\z\r\z\9\g\k\y\k\8\x\x\i\b\6\m\7\x\y\s\a\e\a\k\4\r\a\9\g\r\v\1\6\6\w\8\d\g\b\i\i\r\d\w\8\v\9\e\x\7\p\4\r\d\t\5\4\8\y\z\5\w\u\1\s\u\0\3\6\i\q\e\7\q\2\n\l\w\u\l\5\4\g\v\b\c\8\y\a\y\g\q\o\w\g\a\y\s\h\5\a\w\4\6\u\w\7\t\8\2\v\s\1\f\l\t\l\g\6\w\z\t\v\t\j\l\n\1\w\l\6\d\x\l\0\2\9\o\i\c\x\h\g\7\2\u\g\h\e\l\6\v\q\y\p\7\j\u\i\p\3\c\k\0\m\z\u\0\q\u\g\4\8\4\y\g\m\x\k\3\a\5\t\g\v\b\k ]] 00:09:07.384 08:46:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:07.384 08:46:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:09:07.384 [2024-09-28 08:46:45.326699] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:07.384 [2024-09-28 08:46:45.326867] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62552 ] 00:09:07.644 [2024-09-28 08:46:45.483397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.644 [2024-09-28 08:46:45.631416] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.904 [2024-09-28 08:46:45.775974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:08.847  Copying: 512/512 [B] (average 166 kBps) 00:09:08.847 00:09:08.847 08:46:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qyoxrgpefmsly78bsi3nsa3y2mec2cudk28x87yqz90b6v2fktaex4xsay7dpgoau22xmng68yzmtxnhymn6k4y0asez1jlexw59avewc90601fxj1chhy13nl720wbmldsi2wg2ha9aal2zksbdb26tqobkq0qe2drxiwtcartswt7j7584823nj7ctope4falmx2a5j6v1d42krt1nt86lxlxl2f0a5devkrnqa1vga1ojhrhplwk0hn7crpkmyf9erd1epwfmjhhncjxant8vqfs01urlvsdrmlx5unzgrsctsrv1ofmkylxc2q1k836qhtzrz9gkyk8xxib6m7xysaeak4ra9grv166w8dgbiirdw8v9ex7p4rdt548yz5wu1su036iqe7q2nlwul54gvbc8yaygqowgaysh5aw46uw7t82vs1fltlg6wztvtjln1wl6dxl029oicxhg72ughel6vqyp7juip3ck0mzu0qug484ygmxk3a5tgvbk == \q\y\o\x\r\g\p\e\f\m\s\l\y\7\8\b\s\i\3\n\s\a\3\y\2\m\e\c\2\c\u\d\k\2\8\x\8\7\y\q\z\9\0\b\6\v\2\f\k\t\a\e\x\4\x\s\a\y\7\d\p\g\o\a\u\2\2\x\m\n\g\6\8\y\z\m\t\x\n\h\y\m\n\6\k\4\y\0\a\s\e\z\1\j\l\e\x\w\5\9\a\v\e\w\c\9\0\6\0\1\f\x\j\1\c\h\h\y\1\3\n\l\7\2\0\w\b\m\l\d\s\i\2\w\g\2\h\a\9\a\a\l\2\z\k\s\b\d\b\2\6\t\q\o\b\k\q\0\q\e\2\d\r\x\i\w\t\c\a\r\t\s\w\t\7\j\7\5\8\4\8\2\3\n\j\7\c\t\o\p\e\4\f\a\l\m\x\2\a\5\j\6\v\1\d\4\2\k\r\t\1\n\t\8\6\l\x\l\x\l\2\f\0\a\5\d\e\v\k\r\n\q\a\1\v\g\a\1\o\j\h\r\h\p\l\w\k\0\h\n\7\c\r\p\k\m\y\f\9\e\r\d\1\e\p\w\f\m\j\h\h\n\c\j\x\a\n\t\8\v\q\f\s\0\1\u\r\l\v\s\d\r\m\l\x\5\u\n\z\g\r\s\c\t\s\r\v\1\o\f\m\k\y\l\x\c\2\q\1\k\8\3\6\q\h\t\z\r\z\9\g\k\y\k\8\x\x\i\b\6\m\7\x\y\s\a\e\a\k\4\r\a\9\g\r\v\1\6\6\w\8\d\g\b\i\i\r\d\w\8\v\9\e\x\7\p\4\r\d\t\5\4\8\y\z\5\w\u\1\s\u\0\3\6\i\q\e\7\q\2\n\l\w\u\l\5\4\g\v\b\c\8\y\a\y\g\q\o\w\g\a\y\s\h\5\a\w\4\6\u\w\7\t\8\2\v\s\1\f\l\t\l\g\6\w\z\t\v\t\j\l\n\1\w\l\6\d\x\l\0\2\9\o\i\c\x\h\g\7\2\u\g\h\e\l\6\v\q\y\p\7\j\u\i\p\3\c\k\0\m\z\u\0\q\u\g\4\8\4\y\g\m\x\k\3\a\5\t\g\v\b\k ]] 00:09:09.107 08:46:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:09:09.107 08:46:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:09:09.107 [2024-09-28 08:46:46.922100] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:09.107 [2024-09-28 08:46:46.922320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62571 ] 00:09:09.107 [2024-09-28 08:46:47.073756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.366 [2024-09-28 08:46:47.236576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.625 [2024-09-28 08:46:47.389062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.562  Copying: 512/512 [B] (average 500 kBps) 00:09:10.562 00:09:10.562 ************************************ 00:09:10.562 END TEST dd_flags_misc_forced_aio 00:09:10.562 ************************************ 00:09:10.562 08:46:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ qyoxrgpefmsly78bsi3nsa3y2mec2cudk28x87yqz90b6v2fktaex4xsay7dpgoau22xmng68yzmtxnhymn6k4y0asez1jlexw59avewc90601fxj1chhy13nl720wbmldsi2wg2ha9aal2zksbdb26tqobkq0qe2drxiwtcartswt7j7584823nj7ctope4falmx2a5j6v1d42krt1nt86lxlxl2f0a5devkrnqa1vga1ojhrhplwk0hn7crpkmyf9erd1epwfmjhhncjxant8vqfs01urlvsdrmlx5unzgrsctsrv1ofmkylxc2q1k836qhtzrz9gkyk8xxib6m7xysaeak4ra9grv166w8dgbiirdw8v9ex7p4rdt548yz5wu1su036iqe7q2nlwul54gvbc8yaygqowgaysh5aw46uw7t82vs1fltlg6wztvtjln1wl6dxl029oicxhg72ughel6vqyp7juip3ck0mzu0qug484ygmxk3a5tgvbk == \q\y\o\x\r\g\p\e\f\m\s\l\y\7\8\b\s\i\3\n\s\a\3\y\2\m\e\c\2\c\u\d\k\2\8\x\8\7\y\q\z\9\0\b\6\v\2\f\k\t\a\e\x\4\x\s\a\y\7\d\p\g\o\a\u\2\2\x\m\n\g\6\8\y\z\m\t\x\n\h\y\m\n\6\k\4\y\0\a\s\e\z\1\j\l\e\x\w\5\9\a\v\e\w\c\9\0\6\0\1\f\x\j\1\c\h\h\y\1\3\n\l\7\2\0\w\b\m\l\d\s\i\2\w\g\2\h\a\9\a\a\l\2\z\k\s\b\d\b\2\6\t\q\o\b\k\q\0\q\e\2\d\r\x\i\w\t\c\a\r\t\s\w\t\7\j\7\5\8\4\8\2\3\n\j\7\c\t\o\p\e\4\f\a\l\m\x\2\a\5\j\6\v\1\d\4\2\k\r\t\1\n\t\8\6\l\x\l\x\l\2\f\0\a\5\d\e\v\k\r\n\q\a\1\v\g\a\1\o\j\h\r\h\p\l\w\k\0\h\n\7\c\r\p\k\m\y\f\9\e\r\d\1\e\p\w\f\m\j\h\h\n\c\j\x\a\n\t\8\v\q\f\s\0\1\u\r\l\v\s\d\r\m\l\x\5\u\n\z\g\r\s\c\t\s\r\v\1\o\f\m\k\y\l\x\c\2\q\1\k\8\3\6\q\h\t\z\r\z\9\g\k\y\k\8\x\x\i\b\6\m\7\x\y\s\a\e\a\k\4\r\a\9\g\r\v\1\6\6\w\8\d\g\b\i\i\r\d\w\8\v\9\e\x\7\p\4\r\d\t\5\4\8\y\z\5\w\u\1\s\u\0\3\6\i\q\e\7\q\2\n\l\w\u\l\5\4\g\v\b\c\8\y\a\y\g\q\o\w\g\a\y\s\h\5\a\w\4\6\u\w\7\t\8\2\v\s\1\f\l\t\l\g\6\w\z\t\v\t\j\l\n\1\w\l\6\d\x\l\0\2\9\o\i\c\x\h\g\7\2\u\g\h\e\l\6\v\q\y\p\7\j\u\i\p\3\c\k\0\m\z\u\0\q\u\g\4\8\4\y\g\m\x\k\3\a\5\t\g\v\b\k ]] 00:09:10.562 00:09:10.562 real 0m12.775s 00:09:10.562 user 0m10.381s 00:09:10.562 sys 0m1.384s 00:09:10.562 08:46:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.562 08:46:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:09:10.562 08:46:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:09:10.562 08:46:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:09:10.562 08:46:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:09:10.562 00:09:10.562 real 0m55.663s 00:09:10.562 user 0m43.348s 00:09:10.562 sys 0m14.126s 00:09:10.562 08:46:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.562 08:46:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:09:10.562 
************************************ 00:09:10.562 END TEST spdk_dd_posix 00:09:10.562 ************************************ 00:09:10.562 08:46:48 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:10.562 08:46:48 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:10.562 08:46:48 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.562 08:46:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:10.562 ************************************ 00:09:10.562 START TEST spdk_dd_malloc 00:09:10.562 ************************************ 00:09:10.562 08:46:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:09:10.822 * Looking for test storage... 00:09:10.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:10.822 08:46:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:10.822 08:46:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:10.822 08:46:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:10.822 08:46:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:10.822 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.822 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:10.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.823 --rc genhtml_branch_coverage=1 00:09:10.823 --rc genhtml_function_coverage=1 00:09:10.823 --rc genhtml_legend=1 00:09:10.823 --rc geninfo_all_blocks=1 00:09:10.823 --rc geninfo_unexecuted_blocks=1 00:09:10.823 00:09:10.823 ' 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:10.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.823 --rc genhtml_branch_coverage=1 00:09:10.823 --rc genhtml_function_coverage=1 00:09:10.823 --rc genhtml_legend=1 00:09:10.823 --rc geninfo_all_blocks=1 00:09:10.823 --rc geninfo_unexecuted_blocks=1 00:09:10.823 00:09:10.823 ' 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:10.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.823 --rc genhtml_branch_coverage=1 00:09:10.823 --rc genhtml_function_coverage=1 00:09:10.823 --rc genhtml_legend=1 00:09:10.823 --rc geninfo_all_blocks=1 00:09:10.823 --rc geninfo_unexecuted_blocks=1 00:09:10.823 00:09:10.823 ' 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:10.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.823 --rc genhtml_branch_coverage=1 00:09:10.823 --rc genhtml_function_coverage=1 00:09:10.823 --rc genhtml_legend=1 00:09:10.823 --rc geninfo_all_blocks=1 00:09:10.823 --rc geninfo_unexecuted_blocks=1 00:09:10.823 00:09:10.823 ' 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.823 08:46:48 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:10.823 ************************************ 00:09:10.823 START TEST dd_malloc_copy 00:09:10.823 ************************************ 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:10.823 08:46:48 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:10.823 { 00:09:10.823 "subsystems": [ 00:09:10.823 { 00:09:10.823 "subsystem": "bdev", 00:09:10.823 "config": [ 00:09:10.823 { 00:09:10.823 "params": { 00:09:10.823 "block_size": 512, 00:09:10.823 "num_blocks": 1048576, 00:09:10.823 "name": "malloc0" 00:09:10.823 }, 00:09:10.823 "method": "bdev_malloc_create" 00:09:10.823 }, 00:09:10.823 { 00:09:10.823 "params": { 00:09:10.823 "block_size": 512, 00:09:10.823 "num_blocks": 1048576, 00:09:10.823 "name": "malloc1" 00:09:10.823 }, 00:09:10.823 "method": "bdev_malloc_create" 00:09:10.823 }, 00:09:10.823 { 00:09:10.823 "method": "bdev_wait_for_examine" 00:09:10.823 } 00:09:10.823 ] 00:09:10.823 } 00:09:10.823 ] 00:09:10.823 } 00:09:10.823 [2024-09-28 08:46:48.805665] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:10.823 [2024-09-28 08:46:48.805897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62665 ] 00:09:11.083 [2024-09-28 08:46:48.981825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.342 [2024-09-28 08:46:49.132780] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.342 [2024-09-28 08:46:49.291801] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.440  Copying: 190/512 [MB] (190 MBps) Copying: 371/512 [MB] (181 MBps) Copying: 512/512 [MB] (average 183 MBps) 00:09:18.440 00:09:18.440 08:46:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:09:18.440 08:46:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:09:18.440 08:46:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:18.440 08:46:56 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:18.440 { 00:09:18.440 "subsystems": [ 00:09:18.440 { 00:09:18.440 "subsystem": "bdev", 00:09:18.440 "config": [ 00:09:18.440 { 00:09:18.440 "params": { 00:09:18.440 "block_size": 512, 00:09:18.440 "num_blocks": 1048576, 00:09:18.440 "name": "malloc0" 00:09:18.440 }, 00:09:18.440 "method": "bdev_malloc_create" 00:09:18.440 }, 00:09:18.440 { 00:09:18.440 "params": { 00:09:18.440 "block_size": 512, 00:09:18.440 "num_blocks": 1048576, 00:09:18.440 "name": "malloc1" 00:09:18.440 }, 00:09:18.440 "method": "bdev_malloc_create" 00:09:18.440 }, 00:09:18.440 { 00:09:18.440 "method": 
"bdev_wait_for_examine" 00:09:18.440 } 00:09:18.440 ] 00:09:18.440 } 00:09:18.440 ] 00:09:18.440 } 00:09:18.440 [2024-09-28 08:46:56.099317] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:18.440 [2024-09-28 08:46:56.099451] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62747 ] 00:09:18.440 [2024-09-28 08:46:56.260339] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.440 [2024-09-28 08:46:56.427021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.699 [2024-09-28 08:46:56.586858] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:26.224  Copying: 169/512 [MB] (169 MBps) Copying: 342/512 [MB] (173 MBps) Copying: 512/512 [MB] (average 172 MBps) 00:09:26.224 00:09:26.224 00:09:26.224 real 0m14.943s 00:09:26.224 user 0m13.932s 00:09:26.224 sys 0m0.826s 00:09:26.224 ************************************ 00:09:26.224 08:47:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.224 08:47:03 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:26.224 END TEST dd_malloc_copy 00:09:26.224 ************************************ 00:09:26.224 00:09:26.224 real 0m15.151s 00:09:26.224 user 0m14.035s 00:09:26.224 sys 0m0.936s 00:09:26.224 08:47:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.224 ************************************ 00:09:26.224 END TEST spdk_dd_malloc 00:09:26.224 08:47:03 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:26.224 ************************************ 00:09:26.224 08:47:03 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:26.224 08:47:03 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:26.224 08:47:03 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.224 08:47:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:26.224 ************************************ 00:09:26.224 START TEST spdk_dd_bdev_to_bdev 00:09:26.224 ************************************ 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:26.224 * Looking for test storage... 
00:09:26.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:26.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.224 --rc genhtml_branch_coverage=1 00:09:26.224 --rc genhtml_function_coverage=1 00:09:26.224 --rc genhtml_legend=1 00:09:26.224 --rc geninfo_all_blocks=1 00:09:26.224 --rc geninfo_unexecuted_blocks=1 00:09:26.224 00:09:26.224 ' 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:26.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.224 --rc genhtml_branch_coverage=1 00:09:26.224 --rc genhtml_function_coverage=1 00:09:26.224 --rc genhtml_legend=1 00:09:26.224 --rc geninfo_all_blocks=1 00:09:26.224 --rc geninfo_unexecuted_blocks=1 00:09:26.224 00:09:26.224 ' 00:09:26.224 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:26.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.224 --rc genhtml_branch_coverage=1 00:09:26.225 --rc genhtml_function_coverage=1 00:09:26.225 --rc genhtml_legend=1 00:09:26.225 --rc geninfo_all_blocks=1 00:09:26.225 --rc geninfo_unexecuted_blocks=1 00:09:26.225 00:09:26.225 ' 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:26.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.225 --rc genhtml_branch_coverage=1 00:09:26.225 --rc genhtml_function_coverage=1 00:09:26.225 --rc genhtml_legend=1 00:09:26.225 --rc geninfo_all_blocks=1 00:09:26.225 --rc geninfo_unexecuted_blocks=1 00:09:26.225 00:09:26.225 ' 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.225 08:47:03 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:26.225 ************************************ 00:09:26.225 START TEST dd_inflate_file 00:09:26.225 ************************************ 00:09:26.225 08:47:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:26.225 [2024-09-28 08:47:04.035943] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:26.225 [2024-09-28 08:47:04.036108] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62910 ] 00:09:26.488 [2024-09-28 08:47:04.202808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.488 [2024-09-28 08:47:04.362787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.748 [2024-09-28 08:47:04.517754] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:27.686  Copying: 64/64 [MB] (average 1600 MBps) 00:09:27.686 00:09:27.686 00:09:27.686 real 0m1.652s 00:09:27.686 user 0m1.329s 00:09:27.686 sys 0m0.883s 00:09:27.686 08:47:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.686 08:47:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:09:27.686 ************************************ 00:09:27.686 END TEST dd_inflate_file 00:09:27.686 ************************************ 00:09:27.686 08:47:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:09:27.686 08:47:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:09:27.686 08:47:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:27.686 08:47:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:09:27.686 08:47:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:27.686 08:47:05 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:27.686 08:47:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.686 08:47:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:27.686 08:47:05 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:27.686 ************************************ 00:09:27.686 START TEST dd_copy_to_out_bdev 00:09:27.686 ************************************ 00:09:27.687 08:47:05 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:27.945 { 00:09:27.945 "subsystems": [ 00:09:27.945 { 00:09:27.945 "subsystem": "bdev", 00:09:27.945 "config": [ 00:09:27.945 { 00:09:27.945 "params": { 00:09:27.945 "trtype": "pcie", 00:09:27.945 "traddr": "0000:00:10.0", 00:09:27.945 "name": "Nvme0" 00:09:27.945 }, 00:09:27.946 "method": "bdev_nvme_attach_controller" 00:09:27.946 }, 00:09:27.946 { 00:09:27.946 "params": { 00:09:27.946 "trtype": "pcie", 00:09:27.946 "traddr": "0000:00:11.0", 00:09:27.946 "name": "Nvme1" 00:09:27.946 }, 00:09:27.946 "method": "bdev_nvme_attach_controller" 00:09:27.946 }, 00:09:27.946 { 00:09:27.946 "method": "bdev_wait_for_examine" 00:09:27.946 } 00:09:27.946 ] 00:09:27.946 } 00:09:27.946 ] 00:09:27.946 } 00:09:27.946 [2024-09-28 08:47:05.728361] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:27.946 [2024-09-28 08:47:05.728512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62956 ] 00:09:27.946 [2024-09-28 08:47:05.883654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.204 [2024-09-28 08:47:06.040334] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.204 [2024-09-28 08:47:06.182867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:31.219  Copying: 45/64 [MB] (45 MBps) Copying: 64/64 [MB] (average 45 MBps) 00:09:31.219 00:09:31.219 00:09:31.219 real 0m3.224s 00:09:31.219 user 0m2.956s 00:09:31.219 sys 0m2.297s 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:31.219 ************************************ 00:09:31.219 END TEST dd_copy_to_out_bdev 00:09:31.219 ************************************ 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:31.219 ************************************ 00:09:31.219 START TEST dd_offset_magic 00:09:31.219 ************************************ 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:31.219 08:47:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:31.219 { 00:09:31.219 "subsystems": [ 00:09:31.220 { 00:09:31.220 "subsystem": "bdev", 00:09:31.220 "config": [ 00:09:31.220 { 00:09:31.220 "params": { 00:09:31.220 "trtype": "pcie", 00:09:31.220 "traddr": "0000:00:10.0", 00:09:31.220 "name": "Nvme0" 00:09:31.220 }, 00:09:31.220 "method": "bdev_nvme_attach_controller" 00:09:31.220 }, 00:09:31.220 { 00:09:31.220 "params": { 00:09:31.220 "trtype": "pcie", 00:09:31.220 "traddr": "0000:00:11.0", 00:09:31.220 "name": "Nvme1" 00:09:31.220 }, 00:09:31.220 "method": 
"bdev_nvme_attach_controller" 00:09:31.220 }, 00:09:31.220 { 00:09:31.220 "method": "bdev_wait_for_examine" 00:09:31.220 } 00:09:31.220 ] 00:09:31.220 } 00:09:31.220 ] 00:09:31.220 } 00:09:31.220 [2024-09-28 08:47:09.006715] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:31.220 [2024-09-28 08:47:09.006922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63013 ] 00:09:31.220 [2024-09-28 08:47:09.158983] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.479 [2024-09-28 08:47:09.319935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.737 [2024-09-28 08:47:09.477088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.934  Copying: 65/65 [MB] (average 955 MBps) 00:09:32.934 00:09:32.934 08:47:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:32.934 08:47:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:32.934 08:47:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:32.934 08:47:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:32.934 { 00:09:32.934 "subsystems": [ 00:09:32.934 { 00:09:32.934 "subsystem": "bdev", 00:09:32.934 "config": [ 00:09:32.934 { 00:09:32.934 "params": { 00:09:32.934 "trtype": "pcie", 00:09:32.934 "traddr": "0000:00:10.0", 00:09:32.934 "name": "Nvme0" 00:09:32.934 }, 00:09:32.934 "method": "bdev_nvme_attach_controller" 00:09:32.934 }, 00:09:32.934 { 00:09:32.934 "params": { 00:09:32.934 "trtype": "pcie", 00:09:32.934 "traddr": "0000:00:11.0", 00:09:32.934 "name": "Nvme1" 00:09:32.934 }, 00:09:32.934 "method": "bdev_nvme_attach_controller" 00:09:32.934 }, 00:09:32.934 { 00:09:32.934 "method": "bdev_wait_for_examine" 00:09:32.934 } 00:09:32.934 ] 00:09:32.934 } 00:09:32.934 ] 00:09:32.934 } 00:09:32.934 [2024-09-28 08:47:10.759675] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:32.934 [2024-09-28 08:47:10.759856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63044 ] 00:09:32.934 [2024-09-28 08:47:10.911089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.193 [2024-09-28 08:47:11.065549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.452 [2024-09-28 08:47:11.210429] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:34.830  Copying: 1024/1024 [kB] (average 500 MBps) 00:09:34.830 00:09:34.830 08:47:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:34.830 08:47:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:34.830 08:47:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:34.830 08:47:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:34.830 08:47:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:34.830 08:47:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:34.830 08:47:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:34.830 { 00:09:34.830 "subsystems": [ 00:09:34.830 { 00:09:34.830 "subsystem": "bdev", 00:09:34.830 "config": [ 00:09:34.830 { 00:09:34.830 "params": { 00:09:34.830 "trtype": "pcie", 00:09:34.831 "traddr": "0000:00:10.0", 00:09:34.831 "name": "Nvme0" 00:09:34.831 }, 00:09:34.831 "method": "bdev_nvme_attach_controller" 00:09:34.831 }, 00:09:34.831 { 00:09:34.831 "params": { 00:09:34.831 "trtype": "pcie", 00:09:34.831 "traddr": "0000:00:11.0", 00:09:34.831 "name": "Nvme1" 00:09:34.831 }, 00:09:34.831 "method": "bdev_nvme_attach_controller" 00:09:34.831 }, 00:09:34.831 { 00:09:34.831 "method": "bdev_wait_for_examine" 00:09:34.831 } 00:09:34.831 ] 00:09:34.831 } 00:09:34.831 ] 00:09:34.831 } 00:09:34.831 [2024-09-28 08:47:12.560654] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:34.831 [2024-09-28 08:47:12.560886] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63073 ] 00:09:34.831 [2024-09-28 08:47:12.716238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.089 [2024-09-28 08:47:12.872106] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.089 [2024-09-28 08:47:13.019383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.285  Copying: 65/65 [MB] (average 1101 MBps) 00:09:36.285 00:09:36.285 08:47:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:36.285 08:47:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:36.285 08:47:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:36.285 08:47:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:36.285 { 00:09:36.285 "subsystems": [ 00:09:36.285 { 00:09:36.285 "subsystem": "bdev", 00:09:36.285 "config": [ 00:09:36.285 { 00:09:36.285 "params": { 00:09:36.285 "trtype": "pcie", 00:09:36.285 "traddr": "0000:00:10.0", 00:09:36.285 "name": "Nvme0" 00:09:36.285 }, 00:09:36.285 "method": "bdev_nvme_attach_controller" 00:09:36.285 }, 00:09:36.285 { 00:09:36.285 "params": { 00:09:36.285 "trtype": "pcie", 00:09:36.285 "traddr": "0000:00:11.0", 00:09:36.285 "name": "Nvme1" 00:09:36.285 }, 00:09:36.285 "method": "bdev_nvme_attach_controller" 00:09:36.285 }, 00:09:36.285 { 00:09:36.285 "method": "bdev_wait_for_examine" 00:09:36.285 } 00:09:36.285 ] 00:09:36.285 } 00:09:36.285 ] 00:09:36.285 } 00:09:36.545 [2024-09-28 08:47:14.281317] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:36.545 [2024-09-28 08:47:14.281542] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63101 ] 00:09:36.545 [2024-09-28 08:47:14.451435] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.807 [2024-09-28 08:47:14.625576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.807 [2024-09-28 08:47:14.796973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.012  Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:38.012 00:09:38.012 08:47:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:38.012 08:47:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:38.012 00:09:38.012 real 0m7.079s 00:09:38.012 user 0m6.078s 00:09:38.012 sys 0m2.063s 00:09:38.012 08:47:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.012 08:47:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:38.012 ************************************ 00:09:38.012 END TEST dd_offset_magic 00:09:38.012 ************************************ 00:09:38.275 08:47:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:38.275 08:47:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:38.275 08:47:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:38.275 08:47:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:38.275 08:47:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:38.275 08:47:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:38.275 08:47:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:38.275 08:47:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:38.275 08:47:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:38.275 08:47:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:38.275 08:47:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:38.275 { 00:09:38.275 "subsystems": [ 00:09:38.275 { 00:09:38.275 "subsystem": "bdev", 00:09:38.275 "config": [ 00:09:38.275 { 00:09:38.275 "params": { 00:09:38.275 "trtype": "pcie", 00:09:38.275 "traddr": "0000:00:10.0", 00:09:38.275 "name": "Nvme0" 00:09:38.275 }, 00:09:38.275 "method": "bdev_nvme_attach_controller" 00:09:38.275 }, 00:09:38.275 { 00:09:38.275 "params": { 00:09:38.275 "trtype": "pcie", 00:09:38.275 "traddr": "0000:00:11.0", 00:09:38.275 "name": "Nvme1" 00:09:38.275 }, 00:09:38.275 "method": "bdev_nvme_attach_controller" 00:09:38.275 }, 00:09:38.275 { 00:09:38.275 "method": "bdev_wait_for_examine" 00:09:38.275 } 00:09:38.275 ] 00:09:38.275 } 00:09:38.275 ] 00:09:38.275 } 00:09:38.275 [2024-09-28 08:47:16.130224] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:38.275 [2024-09-28 08:47:16.130379] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63150 ] 00:09:38.535 [2024-09-28 08:47:16.278499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.535 [2024-09-28 08:47:16.438289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.794 [2024-09-28 08:47:16.588383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.620  Copying: 5120/5120 [kB] (average 1250 MBps) 00:09:39.620 00:09:39.879 08:47:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:39.879 08:47:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:39.879 08:47:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:39.879 08:47:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:39.879 08:47:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:39.879 08:47:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:39.879 08:47:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:39.879 08:47:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:39.879 08:47:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:39.879 08:47:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:39.879 { 00:09:39.879 "subsystems": [ 00:09:39.879 { 00:09:39.879 "subsystem": "bdev", 00:09:39.879 "config": [ 00:09:39.879 { 00:09:39.879 "params": { 00:09:39.879 "trtype": "pcie", 00:09:39.879 "traddr": "0000:00:10.0", 00:09:39.879 "name": "Nvme0" 00:09:39.879 }, 00:09:39.879 "method": "bdev_nvme_attach_controller" 00:09:39.879 }, 00:09:39.879 { 00:09:39.879 "params": { 00:09:39.879 "trtype": "pcie", 00:09:39.879 "traddr": "0000:00:11.0", 00:09:39.879 "name": "Nvme1" 00:09:39.879 }, 00:09:39.879 "method": "bdev_nvme_attach_controller" 00:09:39.879 }, 00:09:39.879 { 00:09:39.879 "method": "bdev_wait_for_examine" 00:09:39.879 } 00:09:39.879 ] 00:09:39.879 } 00:09:39.879 ] 00:09:39.879 } 00:09:39.879 [2024-09-28 08:47:17.747626] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:39.879 [2024-09-28 08:47:17.747791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63172 ] 00:09:40.138 [2024-09-28 08:47:17.911664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.138 [2024-09-28 08:47:18.062140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.397 [2024-09-28 08:47:18.208876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:41.592  Copying: 5120/5120 [kB] (average 833 MBps) 00:09:41.592 00:09:41.592 08:47:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:41.592 00:09:41.592 real 0m15.712s 00:09:41.592 user 0m13.369s 00:09:41.592 sys 0m6.953s 00:09:41.592 08:47:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:41.592 08:47:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:41.592 ************************************ 00:09:41.592 END TEST spdk_dd_bdev_to_bdev 00:09:41.592 ************************************ 00:09:41.592 08:47:19 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:41.592 08:47:19 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:41.592 08:47:19 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:41.592 08:47:19 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.592 08:47:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:41.592 ************************************ 00:09:41.592 START TEST spdk_dd_uring 00:09:41.592 ************************************ 00:09:41.592 08:47:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:41.592 * Looking for test storage... 
00:09:41.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:41.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.852 --rc genhtml_branch_coverage=1 00:09:41.852 --rc genhtml_function_coverage=1 00:09:41.852 --rc genhtml_legend=1 00:09:41.852 --rc geninfo_all_blocks=1 00:09:41.852 --rc geninfo_unexecuted_blocks=1 00:09:41.852 00:09:41.852 ' 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:41.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.852 --rc genhtml_branch_coverage=1 00:09:41.852 --rc genhtml_function_coverage=1 00:09:41.852 --rc genhtml_legend=1 00:09:41.852 --rc geninfo_all_blocks=1 00:09:41.852 --rc geninfo_unexecuted_blocks=1 00:09:41.852 00:09:41.852 ' 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:41.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.852 --rc genhtml_branch_coverage=1 00:09:41.852 --rc genhtml_function_coverage=1 00:09:41.852 --rc genhtml_legend=1 00:09:41.852 --rc geninfo_all_blocks=1 00:09:41.852 --rc geninfo_unexecuted_blocks=1 00:09:41.852 00:09:41.852 ' 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:41.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.852 --rc genhtml_branch_coverage=1 00:09:41.852 --rc genhtml_function_coverage=1 00:09:41.852 --rc genhtml_legend=1 00:09:41.852 --rc geninfo_all_blocks=1 00:09:41.852 --rc geninfo_unexecuted_blocks=1 00:09:41.852 00:09:41.852 ' 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.852 08:47:19 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:41.853 ************************************ 00:09:41.853 START TEST dd_uring_copy 00:09:41.853 ************************************ 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:41.853 
08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=e3c7t9r4fzo8d1225b2kd41p2b6pej46j88xculpewhow39dgocmztey2fir048tfop9rcxgr96bf0m6aj64oc5a8bgwcged9eexvqubaehofdx7ju8vecv2do0fztekslp4qxjjlcpnj5of77nsl7v996kbtjes463kkn4o3lbhec0eood1t75i0pyyw3clxy34zaqjv75kw2comhhyymzf8nuexgaxmoy8bilscjzyadyvgffhvizzfalvgqlt9mq3e3mqllsl14xf8ulht6gqekhw1t1nd7gm9jdyjl83relqo04mgzq4o80cqds6d0v7p6j0vyxm0wjtt8v29s43cogi1jye9f9u6pzoxk622ranw0s92peuast1xtzclk48ra1g9zkuo8jvxocif4or57e9zv9i17imms7wryb2kp3s07j1q9x54y61j01cf23gy1ovaoxeqgz38tanapnwxr0uspa2jxjqyrsc1yq3z9vw713dhkbfi5sljjjoss6amc6suzo3abkkfh6i4hsjeu2qqe45ote7b4jybovgl2vxpuqg6950qhfmrrd6tc7mtypgo8ooqpgtrflrq6hfkom3dka31v0qedu3lbgd00yjvowrq87v5b4ml2iykigumm7edakrgyz61chzst6omfnvynm5cbftjo08kfltc1rxnh7kj8xb73f41fv1tpb665rl422nb8u0vf4idzuxk1kkee6ve4dof04wb7yevfpqk2uqk8j66yy82vo6mw1ydno5095988jv45otdk8vopxrl0ll1nh1tzhcb0ckzho7jii0g8w1qplnzb4h1e95zgigdztu1795fs37lam41bxpiu0lcuoa4ztvs9c7ix7gvyoz9xm6dd35chym5rm5cgqrde6ua9fv90apoissvxd9pn5p7tjq9578hv8xp74p55kd76lk5uca538o2wkwepiptxmw6xoxa2uq3atnwyuv43wobbgw9et9alhlxt5jc0ltey3yy90ehgev 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
e3c7t9r4fzo8d1225b2kd41p2b6pej46j88xculpewhow39dgocmztey2fir048tfop9rcxgr96bf0m6aj64oc5a8bgwcged9eexvqubaehofdx7ju8vecv2do0fztekslp4qxjjlcpnj5of77nsl7v996kbtjes463kkn4o3lbhec0eood1t75i0pyyw3clxy34zaqjv75kw2comhhyymzf8nuexgaxmoy8bilscjzyadyvgffhvizzfalvgqlt9mq3e3mqllsl14xf8ulht6gqekhw1t1nd7gm9jdyjl83relqo04mgzq4o80cqds6d0v7p6j0vyxm0wjtt8v29s43cogi1jye9f9u6pzoxk622ranw0s92peuast1xtzclk48ra1g9zkuo8jvxocif4or57e9zv9i17imms7wryb2kp3s07j1q9x54y61j01cf23gy1ovaoxeqgz38tanapnwxr0uspa2jxjqyrsc1yq3z9vw713dhkbfi5sljjjoss6amc6suzo3abkkfh6i4hsjeu2qqe45ote7b4jybovgl2vxpuqg6950qhfmrrd6tc7mtypgo8ooqpgtrflrq6hfkom3dka31v0qedu3lbgd00yjvowrq87v5b4ml2iykigumm7edakrgyz61chzst6omfnvynm5cbftjo08kfltc1rxnh7kj8xb73f41fv1tpb665rl422nb8u0vf4idzuxk1kkee6ve4dof04wb7yevfpqk2uqk8j66yy82vo6mw1ydno5095988jv45otdk8vopxrl0ll1nh1tzhcb0ckzho7jii0g8w1qplnzb4h1e95zgigdztu1795fs37lam41bxpiu0lcuoa4ztvs9c7ix7gvyoz9xm6dd35chym5rm5cgqrde6ua9fv90apoissvxd9pn5p7tjq9578hv8xp74p55kd76lk5uca538o2wkwepiptxmw6xoxa2uq3atnwyuv43wobbgw9et9alhlxt5jc0ltey3yy90ehgev 00:09:41.853 08:47:19 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:42.112 [2024-09-28 08:47:19.866136] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:42.112 [2024-09-28 08:47:19.866317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63262 ] 00:09:42.112 [2024-09-28 08:47:20.039759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.371 [2024-09-28 08:47:20.266611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.631 [2024-09-28 08:47:20.415971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:45.470  Copying: 511/511 [MB] (average 1361 MBps) 00:09:45.470 00:09:45.470 08:47:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:45.470 08:47:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:09:45.470 08:47:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:45.470 08:47:23 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:45.728 { 00:09:45.728 "subsystems": [ 00:09:45.728 { 00:09:45.728 "subsystem": "bdev", 00:09:45.728 "config": [ 00:09:45.728 { 00:09:45.728 "params": { 00:09:45.728 "block_size": 512, 00:09:45.728 "num_blocks": 1048576, 00:09:45.729 "name": "malloc0" 00:09:45.729 }, 00:09:45.729 "method": "bdev_malloc_create" 00:09:45.729 }, 00:09:45.729 { 00:09:45.729 "params": { 00:09:45.729 "filename": "/dev/zram1", 00:09:45.729 "name": "uring0" 00:09:45.729 }, 00:09:45.729 "method": "bdev_uring_create" 00:09:45.729 }, 00:09:45.729 { 00:09:45.729 "method": "bdev_wait_for_examine" 00:09:45.729 } 00:09:45.729 ] 00:09:45.729 } 00:09:45.729 ] 00:09:45.729 } 00:09:45.729 [2024-09-28 08:47:23.551032] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:45.729 [2024-09-28 08:47:23.551227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63312 ] 00:09:45.729 [2024-09-28 08:47:23.721229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.987 [2024-09-28 08:47:23.886001] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.247 [2024-09-28 08:47:24.051654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:51.113  Copying: 213/512 [MB] (213 MBps) Copying: 422/512 [MB] (208 MBps) Copying: 512/512 [MB] (average 210 MBps) 00:09:51.113 00:09:51.372 08:47:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:51.372 08:47:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:51.372 08:47:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:51.372 08:47:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:51.372 { 00:09:51.372 "subsystems": [ 00:09:51.372 { 00:09:51.372 "subsystem": "bdev", 00:09:51.372 "config": [ 00:09:51.372 { 00:09:51.372 "params": { 00:09:51.372 "block_size": 512, 00:09:51.372 "num_blocks": 1048576, 00:09:51.372 "name": "malloc0" 00:09:51.372 }, 00:09:51.372 "method": "bdev_malloc_create" 00:09:51.372 }, 00:09:51.372 { 00:09:51.372 "params": { 00:09:51.372 "filename": "/dev/zram1", 00:09:51.372 "name": "uring0" 00:09:51.372 }, 00:09:51.372 "method": "bdev_uring_create" 00:09:51.372 }, 00:09:51.372 { 00:09:51.372 "method": "bdev_wait_for_examine" 00:09:51.372 } 00:09:51.372 ] 00:09:51.372 } 00:09:51.372 ] 00:09:51.372 } 00:09:51.372 [2024-09-28 08:47:29.234865] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:51.372 [2024-09-28 08:47:29.235048] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63388 ] 00:09:51.632 [2024-09-28 08:47:29.407007] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.632 [2024-09-28 08:47:29.603543] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.891 [2024-09-28 08:47:29.765659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:58.407  Copying: 139/512 [MB] (139 MBps) Copying: 288/512 [MB] (149 MBps) Copying: 439/512 [MB] (150 MBps) Copying: 512/512 [MB] (average 147 MBps) 00:09:58.407 00:09:58.407 08:47:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:58.407 08:47:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ e3c7t9r4fzo8d1225b2kd41p2b6pej46j88xculpewhow39dgocmztey2fir048tfop9rcxgr96bf0m6aj64oc5a8bgwcged9eexvqubaehofdx7ju8vecv2do0fztekslp4qxjjlcpnj5of77nsl7v996kbtjes463kkn4o3lbhec0eood1t75i0pyyw3clxy34zaqjv75kw2comhhyymzf8nuexgaxmoy8bilscjzyadyvgffhvizzfalvgqlt9mq3e3mqllsl14xf8ulht6gqekhw1t1nd7gm9jdyjl83relqo04mgzq4o80cqds6d0v7p6j0vyxm0wjtt8v29s43cogi1jye9f9u6pzoxk622ranw0s92peuast1xtzclk48ra1g9zkuo8jvxocif4or57e9zv9i17imms7wryb2kp3s07j1q9x54y61j01cf23gy1ovaoxeqgz38tanapnwxr0uspa2jxjqyrsc1yq3z9vw713dhkbfi5sljjjoss6amc6suzo3abkkfh6i4hsjeu2qqe45ote7b4jybovgl2vxpuqg6950qhfmrrd6tc7mtypgo8ooqpgtrflrq6hfkom3dka31v0qedu3lbgd00yjvowrq87v5b4ml2iykigumm7edakrgyz61chzst6omfnvynm5cbftjo08kfltc1rxnh7kj8xb73f41fv1tpb665rl422nb8u0vf4idzuxk1kkee6ve4dof04wb7yevfpqk2uqk8j66yy82vo6mw1ydno5095988jv45otdk8vopxrl0ll1nh1tzhcb0ckzho7jii0g8w1qplnzb4h1e95zgigdztu1795fs37lam41bxpiu0lcuoa4ztvs9c7ix7gvyoz9xm6dd35chym5rm5cgqrde6ua9fv90apoissvxd9pn5p7tjq9578hv8xp74p55kd76lk5uca538o2wkwepiptxmw6xoxa2uq3atnwyuv43wobbgw9et9alhlxt5jc0ltey3yy90ehgev == 
\e\3\c\7\t\9\r\4\f\z\o\8\d\1\2\2\5\b\2\k\d\4\1\p\2\b\6\p\e\j\4\6\j\8\8\x\c\u\l\p\e\w\h\o\w\3\9\d\g\o\c\m\z\t\e\y\2\f\i\r\0\4\8\t\f\o\p\9\r\c\x\g\r\9\6\b\f\0\m\6\a\j\6\4\o\c\5\a\8\b\g\w\c\g\e\d\9\e\e\x\v\q\u\b\a\e\h\o\f\d\x\7\j\u\8\v\e\c\v\2\d\o\0\f\z\t\e\k\s\l\p\4\q\x\j\j\l\c\p\n\j\5\o\f\7\7\n\s\l\7\v\9\9\6\k\b\t\j\e\s\4\6\3\k\k\n\4\o\3\l\b\h\e\c\0\e\o\o\d\1\t\7\5\i\0\p\y\y\w\3\c\l\x\y\3\4\z\a\q\j\v\7\5\k\w\2\c\o\m\h\h\y\y\m\z\f\8\n\u\e\x\g\a\x\m\o\y\8\b\i\l\s\c\j\z\y\a\d\y\v\g\f\f\h\v\i\z\z\f\a\l\v\g\q\l\t\9\m\q\3\e\3\m\q\l\l\s\l\1\4\x\f\8\u\l\h\t\6\g\q\e\k\h\w\1\t\1\n\d\7\g\m\9\j\d\y\j\l\8\3\r\e\l\q\o\0\4\m\g\z\q\4\o\8\0\c\q\d\s\6\d\0\v\7\p\6\j\0\v\y\x\m\0\w\j\t\t\8\v\2\9\s\4\3\c\o\g\i\1\j\y\e\9\f\9\u\6\p\z\o\x\k\6\2\2\r\a\n\w\0\s\9\2\p\e\u\a\s\t\1\x\t\z\c\l\k\4\8\r\a\1\g\9\z\k\u\o\8\j\v\x\o\c\i\f\4\o\r\5\7\e\9\z\v\9\i\1\7\i\m\m\s\7\w\r\y\b\2\k\p\3\s\0\7\j\1\q\9\x\5\4\y\6\1\j\0\1\c\f\2\3\g\y\1\o\v\a\o\x\e\q\g\z\3\8\t\a\n\a\p\n\w\x\r\0\u\s\p\a\2\j\x\j\q\y\r\s\c\1\y\q\3\z\9\v\w\7\1\3\d\h\k\b\f\i\5\s\l\j\j\j\o\s\s\6\a\m\c\6\s\u\z\o\3\a\b\k\k\f\h\6\i\4\h\s\j\e\u\2\q\q\e\4\5\o\t\e\7\b\4\j\y\b\o\v\g\l\2\v\x\p\u\q\g\6\9\5\0\q\h\f\m\r\r\d\6\t\c\7\m\t\y\p\g\o\8\o\o\q\p\g\t\r\f\l\r\q\6\h\f\k\o\m\3\d\k\a\3\1\v\0\q\e\d\u\3\l\b\g\d\0\0\y\j\v\o\w\r\q\8\7\v\5\b\4\m\l\2\i\y\k\i\g\u\m\m\7\e\d\a\k\r\g\y\z\6\1\c\h\z\s\t\6\o\m\f\n\v\y\n\m\5\c\b\f\t\j\o\0\8\k\f\l\t\c\1\r\x\n\h\7\k\j\8\x\b\7\3\f\4\1\f\v\1\t\p\b\6\6\5\r\l\4\2\2\n\b\8\u\0\v\f\4\i\d\z\u\x\k\1\k\k\e\e\6\v\e\4\d\o\f\0\4\w\b\7\y\e\v\f\p\q\k\2\u\q\k\8\j\6\6\y\y\8\2\v\o\6\m\w\1\y\d\n\o\5\0\9\5\9\8\8\j\v\4\5\o\t\d\k\8\v\o\p\x\r\l\0\l\l\1\n\h\1\t\z\h\c\b\0\c\k\z\h\o\7\j\i\i\0\g\8\w\1\q\p\l\n\z\b\4\h\1\e\9\5\z\g\i\g\d\z\t\u\1\7\9\5\f\s\3\7\l\a\m\4\1\b\x\p\i\u\0\l\c\u\o\a\4\z\t\v\s\9\c\7\i\x\7\g\v\y\o\z\9\x\m\6\d\d\3\5\c\h\y\m\5\r\m\5\c\g\q\r\d\e\6\u\a\9\f\v\9\0\a\p\o\i\s\s\v\x\d\9\p\n\5\p\7\t\j\q\9\5\7\8\h\v\8\x\p\7\4\p\5\5\k\d\7\6\l\k\5\u\c\a\5\3\8\o\2\w\k\w\e\p\i\p\t\x\m\w\6\x\o\x\a\2\u\q\3\a\t\n\w\y\u\v\4\3\w\o\b\b\g\w\9\e\t\9\a\l\h\l\x\t\5\j\c\0\l\t\e\y\3\y\y\9\0\e\h\g\e\v ]] 00:09:58.407 08:47:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:58.407 08:47:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ e3c7t9r4fzo8d1225b2kd41p2b6pej46j88xculpewhow39dgocmztey2fir048tfop9rcxgr96bf0m6aj64oc5a8bgwcged9eexvqubaehofdx7ju8vecv2do0fztekslp4qxjjlcpnj5of77nsl7v996kbtjes463kkn4o3lbhec0eood1t75i0pyyw3clxy34zaqjv75kw2comhhyymzf8nuexgaxmoy8bilscjzyadyvgffhvizzfalvgqlt9mq3e3mqllsl14xf8ulht6gqekhw1t1nd7gm9jdyjl83relqo04mgzq4o80cqds6d0v7p6j0vyxm0wjtt8v29s43cogi1jye9f9u6pzoxk622ranw0s92peuast1xtzclk48ra1g9zkuo8jvxocif4or57e9zv9i17imms7wryb2kp3s07j1q9x54y61j01cf23gy1ovaoxeqgz38tanapnwxr0uspa2jxjqyrsc1yq3z9vw713dhkbfi5sljjjoss6amc6suzo3abkkfh6i4hsjeu2qqe45ote7b4jybovgl2vxpuqg6950qhfmrrd6tc7mtypgo8ooqpgtrflrq6hfkom3dka31v0qedu3lbgd00yjvowrq87v5b4ml2iykigumm7edakrgyz61chzst6omfnvynm5cbftjo08kfltc1rxnh7kj8xb73f41fv1tpb665rl422nb8u0vf4idzuxk1kkee6ve4dof04wb7yevfpqk2uqk8j66yy82vo6mw1ydno5095988jv45otdk8vopxrl0ll1nh1tzhcb0ckzho7jii0g8w1qplnzb4h1e95zgigdztu1795fs37lam41bxpiu0lcuoa4ztvs9c7ix7gvyoz9xm6dd35chym5rm5cgqrde6ua9fv90apoissvxd9pn5p7tjq9578hv8xp74p55kd76lk5uca538o2wkwepiptxmw6xoxa2uq3atnwyuv43wobbgw9et9alhlxt5jc0ltey3yy90ehgev == 
\e\3\c\7\t\9\r\4\f\z\o\8\d\1\2\2\5\b\2\k\d\4\1\p\2\b\6\p\e\j\4\6\j\8\8\x\c\u\l\p\e\w\h\o\w\3\9\d\g\o\c\m\z\t\e\y\2\f\i\r\0\4\8\t\f\o\p\9\r\c\x\g\r\9\6\b\f\0\m\6\a\j\6\4\o\c\5\a\8\b\g\w\c\g\e\d\9\e\e\x\v\q\u\b\a\e\h\o\f\d\x\7\j\u\8\v\e\c\v\2\d\o\0\f\z\t\e\k\s\l\p\4\q\x\j\j\l\c\p\n\j\5\o\f\7\7\n\s\l\7\v\9\9\6\k\b\t\j\e\s\4\6\3\k\k\n\4\o\3\l\b\h\e\c\0\e\o\o\d\1\t\7\5\i\0\p\y\y\w\3\c\l\x\y\3\4\z\a\q\j\v\7\5\k\w\2\c\o\m\h\h\y\y\m\z\f\8\n\u\e\x\g\a\x\m\o\y\8\b\i\l\s\c\j\z\y\a\d\y\v\g\f\f\h\v\i\z\z\f\a\l\v\g\q\l\t\9\m\q\3\e\3\m\q\l\l\s\l\1\4\x\f\8\u\l\h\t\6\g\q\e\k\h\w\1\t\1\n\d\7\g\m\9\j\d\y\j\l\8\3\r\e\l\q\o\0\4\m\g\z\q\4\o\8\0\c\q\d\s\6\d\0\v\7\p\6\j\0\v\y\x\m\0\w\j\t\t\8\v\2\9\s\4\3\c\o\g\i\1\j\y\e\9\f\9\u\6\p\z\o\x\k\6\2\2\r\a\n\w\0\s\9\2\p\e\u\a\s\t\1\x\t\z\c\l\k\4\8\r\a\1\g\9\z\k\u\o\8\j\v\x\o\c\i\f\4\o\r\5\7\e\9\z\v\9\i\1\7\i\m\m\s\7\w\r\y\b\2\k\p\3\s\0\7\j\1\q\9\x\5\4\y\6\1\j\0\1\c\f\2\3\g\y\1\o\v\a\o\x\e\q\g\z\3\8\t\a\n\a\p\n\w\x\r\0\u\s\p\a\2\j\x\j\q\y\r\s\c\1\y\q\3\z\9\v\w\7\1\3\d\h\k\b\f\i\5\s\l\j\j\j\o\s\s\6\a\m\c\6\s\u\z\o\3\a\b\k\k\f\h\6\i\4\h\s\j\e\u\2\q\q\e\4\5\o\t\e\7\b\4\j\y\b\o\v\g\l\2\v\x\p\u\q\g\6\9\5\0\q\h\f\m\r\r\d\6\t\c\7\m\t\y\p\g\o\8\o\o\q\p\g\t\r\f\l\r\q\6\h\f\k\o\m\3\d\k\a\3\1\v\0\q\e\d\u\3\l\b\g\d\0\0\y\j\v\o\w\r\q\8\7\v\5\b\4\m\l\2\i\y\k\i\g\u\m\m\7\e\d\a\k\r\g\y\z\6\1\c\h\z\s\t\6\o\m\f\n\v\y\n\m\5\c\b\f\t\j\o\0\8\k\f\l\t\c\1\r\x\n\h\7\k\j\8\x\b\7\3\f\4\1\f\v\1\t\p\b\6\6\5\r\l\4\2\2\n\b\8\u\0\v\f\4\i\d\z\u\x\k\1\k\k\e\e\6\v\e\4\d\o\f\0\4\w\b\7\y\e\v\f\p\q\k\2\u\q\k\8\j\6\6\y\y\8\2\v\o\6\m\w\1\y\d\n\o\5\0\9\5\9\8\8\j\v\4\5\o\t\d\k\8\v\o\p\x\r\l\0\l\l\1\n\h\1\t\z\h\c\b\0\c\k\z\h\o\7\j\i\i\0\g\8\w\1\q\p\l\n\z\b\4\h\1\e\9\5\z\g\i\g\d\z\t\u\1\7\9\5\f\s\3\7\l\a\m\4\1\b\x\p\i\u\0\l\c\u\o\a\4\z\t\v\s\9\c\7\i\x\7\g\v\y\o\z\9\x\m\6\d\d\3\5\c\h\y\m\5\r\m\5\c\g\q\r\d\e\6\u\a\9\f\v\9\0\a\p\o\i\s\s\v\x\d\9\p\n\5\p\7\t\j\q\9\5\7\8\h\v\8\x\p\7\4\p\5\5\k\d\7\6\l\k\5\u\c\a\5\3\8\o\2\w\k\w\e\p\i\p\t\x\m\w\6\x\o\x\a\2\u\q\3\a\t\n\w\y\u\v\4\3\w\o\b\b\g\w\9\e\t\9\a\l\h\l\x\t\5\j\c\0\l\t\e\y\3\y\y\9\0\e\h\g\e\v ]] 00:09:58.407 08:47:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:58.407 08:47:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:58.407 08:47:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:58.407 08:47:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:58.407 08:47:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:58.407 { 00:09:58.407 "subsystems": [ 00:09:58.407 { 00:09:58.407 "subsystem": "bdev", 00:09:58.407 "config": [ 00:09:58.407 { 00:09:58.407 "params": { 00:09:58.407 "block_size": 512, 00:09:58.407 "num_blocks": 1048576, 00:09:58.407 "name": "malloc0" 00:09:58.407 }, 00:09:58.407 "method": "bdev_malloc_create" 00:09:58.407 }, 00:09:58.407 { 00:09:58.407 "params": { 00:09:58.407 "filename": "/dev/zram1", 00:09:58.407 "name": "uring0" 00:09:58.407 }, 00:09:58.407 "method": "bdev_uring_create" 00:09:58.407 }, 00:09:58.407 { 00:09:58.407 "method": "bdev_wait_for_examine" 00:09:58.407 } 00:09:58.407 ] 00:09:58.407 } 00:09:58.407 ] 00:09:58.407 } 00:09:58.407 [2024-09-28 08:47:36.387899] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:09:58.407 [2024-09-28 08:47:36.388057] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63514 ] 00:09:58.666 [2024-09-28 08:47:36.547974] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.925 [2024-09-28 08:47:36.712420] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.925 [2024-09-28 08:47:36.874960] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.802  Copying: 139/512 [MB] (139 MBps) Copying: 279/512 [MB] (139 MBps) Copying: 412/512 [MB] (133 MBps) Copying: 512/512 [MB] (average 135 MBps) 00:10:05.802 00:10:05.802 08:47:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:10:05.802 08:47:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:10:05.802 08:47:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:10:05.802 08:47:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:10:05.802 08:47:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:10:05.802 08:47:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:10:05.802 08:47:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:05.802 08:47:43 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:05.802 { 00:10:05.802 "subsystems": [ 00:10:05.802 { 00:10:05.802 "subsystem": "bdev", 00:10:05.802 "config": [ 00:10:05.802 { 00:10:05.802 "params": { 00:10:05.802 "block_size": 512, 00:10:05.802 "num_blocks": 1048576, 00:10:05.802 "name": "malloc0" 00:10:05.802 }, 00:10:05.802 "method": "bdev_malloc_create" 00:10:05.802 }, 00:10:05.802 { 00:10:05.802 "params": { 00:10:05.802 "filename": "/dev/zram1", 00:10:05.802 "name": "uring0" 00:10:05.802 }, 00:10:05.802 "method": "bdev_uring_create" 00:10:05.802 }, 00:10:05.802 { 00:10:05.802 "params": { 00:10:05.802 "name": "uring0" 00:10:05.802 }, 00:10:05.802 "method": "bdev_uring_delete" 00:10:05.802 }, 00:10:05.802 { 00:10:05.802 "method": "bdev_wait_for_examine" 00:10:05.802 } 00:10:05.802 ] 00:10:05.802 } 00:10:05.802 ] 00:10:05.802 } 00:10:05.802 [2024-09-28 08:47:43.503510] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:05.802 [2024-09-28 08:47:43.503667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63599 ] 00:10:05.802 [2024-09-28 08:47:43.655531] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.061 [2024-09-28 08:47:43.835216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.061 [2024-09-28 08:47:43.997244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:09.168  Copying: 0/0 [B] (average 0 Bps) 00:10:09.168 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:09.168 08:47:46 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:10:09.168 { 00:10:09.168 "subsystems": [ 00:10:09.168 { 00:10:09.168 "subsystem": "bdev", 00:10:09.168 "config": [ 00:10:09.168 { 00:10:09.168 "params": { 00:10:09.168 "block_size": 512, 00:10:09.168 "num_blocks": 1048576, 00:10:09.168 "name": "malloc0" 00:10:09.168 }, 00:10:09.168 "method": "bdev_malloc_create" 00:10:09.168 }, 00:10:09.168 { 00:10:09.168 "params": { 00:10:09.168 "filename": "/dev/zram1", 00:10:09.168 "name": "uring0" 00:10:09.168 }, 00:10:09.168 "method": "bdev_uring_create" 00:10:09.168 }, 00:10:09.168 { 00:10:09.168 "params": { 00:10:09.168 "name": "uring0" 00:10:09.168 }, 00:10:09.168 "method": "bdev_uring_delete" 00:10:09.169 }, 
00:10:09.169 { 00:10:09.169 "method": "bdev_wait_for_examine" 00:10:09.169 } 00:10:09.169 ] 00:10:09.169 } 00:10:09.169 ] 00:10:09.169 } 00:10:09.169 [2024-09-28 08:47:46.717933] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:09.169 [2024-09-28 08:47:46.718116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63651 ] 00:10:09.169 [2024-09-28 08:47:46.887421] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.169 [2024-09-28 08:47:47.060453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.428 [2024-09-28 08:47:47.222575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:09.996 [2024-09-28 08:47:47.746048] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:10:09.996 [2024-09-28 08:47:47.746126] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:10:09.996 [2024-09-28 08:47:47.746142] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:10:09.996 [2024-09-28 08:47:47.746165] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:11.373 [2024-09-28 08:47:49.356107] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:11.941 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:10:11.941 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:11.941 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:10:11.941 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:10:11.941 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:10:11.941 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:11.941 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:10:11.941 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:10:11.941 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:10:11.941 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:10:11.941 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:10:11.941 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:10:12.199 00:10:12.199 real 0m30.250s 00:10:12.199 user 0m24.870s 00:10:12.199 sys 0m16.103s 00:10:12.199 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.199 08:47:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:10:12.199 ************************************ 00:10:12.199 END TEST dd_uring_copy 00:10:12.199 ************************************ 00:10:12.199 00:10:12.200 real 0m30.507s 00:10:12.200 user 0m25.008s 00:10:12.200 sys 0m16.225s 00:10:12.200 08:47:50 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.200 ************************************ 00:10:12.200 END TEST spdk_dd_uring 00:10:12.200 08:47:50 spdk_dd.spdk_dd_uring -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.200 ************************************ 00:10:12.200 08:47:50 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:10:12.200 08:47:50 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:12.200 08:47:50 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.200 08:47:50 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:12.200 ************************************ 00:10:12.200 START TEST spdk_dd_sparse 00:10:12.200 ************************************ 00:10:12.200 08:47:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:10:12.200 * Looking for test storage... 00:10:12.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:12.200 08:47:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:12.200 08:47:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:12.200 08:47:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:12.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.459 --rc genhtml_branch_coverage=1 00:10:12.459 --rc genhtml_function_coverage=1 00:10:12.459 --rc genhtml_legend=1 00:10:12.459 --rc geninfo_all_blocks=1 00:10:12.459 --rc geninfo_unexecuted_blocks=1 00:10:12.459 00:10:12.459 ' 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:12.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.459 --rc genhtml_branch_coverage=1 00:10:12.459 --rc genhtml_function_coverage=1 00:10:12.459 --rc genhtml_legend=1 00:10:12.459 --rc geninfo_all_blocks=1 00:10:12.459 --rc geninfo_unexecuted_blocks=1 00:10:12.459 00:10:12.459 ' 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:12.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.459 --rc genhtml_branch_coverage=1 00:10:12.459 --rc genhtml_function_coverage=1 00:10:12.459 --rc genhtml_legend=1 00:10:12.459 --rc geninfo_all_blocks=1 00:10:12.459 --rc geninfo_unexecuted_blocks=1 00:10:12.459 00:10:12.459 ' 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:12.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.459 --rc genhtml_branch_coverage=1 00:10:12.459 --rc genhtml_function_coverage=1 00:10:12.459 --rc genhtml_legend=1 00:10:12.459 --rc geninfo_all_blocks=1 00:10:12.459 --rc geninfo_unexecuted_blocks=1 00:10:12.459 00:10:12.459 ' 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.459 08:47:50 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:10:12.459 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:10:12.460 1+0 records in 00:10:12.460 1+0 records out 00:10:12.460 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00645896 s, 649 MB/s 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:10:12.460 1+0 records in 00:10:12.460 1+0 records out 00:10:12.460 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00560926 s, 748 MB/s 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:10:12.460 1+0 records in 00:10:12.460 1+0 records out 00:10:12.460 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00445434 s, 942 MB/s 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:12.460 ************************************ 00:10:12.460 START TEST dd_sparse_file_to_file 00:10:12.460 ************************************ 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:12.460 08:47:50 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:12.460 { 00:10:12.460 "subsystems": [ 00:10:12.460 { 00:10:12.460 "subsystem": "bdev", 00:10:12.460 "config": [ 00:10:12.460 { 00:10:12.460 "params": { 00:10:12.460 "block_size": 4096, 00:10:12.460 "filename": "dd_sparse_aio_disk", 00:10:12.460 "name": "dd_aio" 00:10:12.460 }, 00:10:12.460 "method": "bdev_aio_create" 00:10:12.460 }, 00:10:12.460 { 00:10:12.460 "params": { 00:10:12.460 "lvs_name": "dd_lvstore", 00:10:12.460 "bdev_name": "dd_aio" 00:10:12.460 }, 00:10:12.460 "method": "bdev_lvol_create_lvstore" 00:10:12.460 }, 00:10:12.460 { 00:10:12.460 "method": "bdev_wait_for_examine" 00:10:12.460 } 00:10:12.460 ] 00:10:12.460 } 00:10:12.460 ] 00:10:12.460 } 00:10:12.460 [2024-09-28 08:47:50.418066] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
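Note on the prepare step traced above: file_zero1 is built as a sparse 36 MiB source file (4 MiB data extents at offsets 0, 16 MiB and 32 MiB, so only 12 MiB is actually allocated) on top of a 100 MB backing file for the aio bdev. A minimal standalone sketch of the same fixture, mirroring the traced truncate/dd commands; the trailing stat check is illustrative and not part of sparse.sh:

    #!/usr/bin/env bash
    set -e
    truncate --size 104857600 dd_sparse_aio_disk         # 100 MB backing file for the dd_aio bdev
    dd if=/dev/zero of=file_zero1 bs=4M count=1          # data extent at offset 0
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # data extent at 16 MiB
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # data extent at 32 MiB
    # Apparent size is 36 MiB, but only 12 MiB (24576 512-byte blocks) is allocated:
    stat --printf='size=%s blocks=%b\n' file_zero1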
00:10:12.460 [2024-09-28 08:47:50.418545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63773 ] 00:10:12.719 [2024-09-28 08:47:50.594132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.978 [2024-09-28 08:47:50.747858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.978 [2024-09-28 08:47:50.893262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:14.609  Copying: 12/36 [MB] (average 1000 MBps) 00:10:14.609 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:14.609 00:10:14.609 real 0m1.926s 00:10:14.609 user 0m1.605s 00:10:14.609 sys 0m0.902s 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:14.609 ************************************ 00:10:14.609 END TEST dd_sparse_file_to_file 00:10:14.609 ************************************ 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:14.609 ************************************ 00:10:14.609 START TEST dd_sparse_file_to_bdev 00:10:14.609 ************************************ 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 
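The dd_sparse_file_to_file check above passes because spdk_dd is run with --sparse, so holes in file_zero1 are skipped and file_zero2 ends up with the same apparent size (37748736 bytes) and the same allocated block count (24576) as the source. A hedged sketch of that comparison as a standalone helper; the function name is made up, sparse.sh performs the equivalent stat checks inline:

    # Succeeds only if both apparent size (%s) and allocated 512-byte blocks (%b) match.
    same_sparseness() {
        local src=$1 dst=$2
        [[ $(stat --printf=%s "$src") == $(stat --printf=%s "$dst") ]] &&
        [[ $(stat --printf=%b "$src") == $(stat --printf=%b "$dst") ]]
    }
    same_sparseness file_zero1 file_zero2 && echo 'holes preserved'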
00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:10:14.609 08:47:52 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:14.609 { 00:10:14.609 "subsystems": [ 00:10:14.609 { 00:10:14.609 "subsystem": "bdev", 00:10:14.609 "config": [ 00:10:14.609 { 00:10:14.609 "params": { 00:10:14.609 "block_size": 4096, 00:10:14.609 "filename": "dd_sparse_aio_disk", 00:10:14.609 "name": "dd_aio" 00:10:14.609 }, 00:10:14.609 "method": "bdev_aio_create" 00:10:14.609 }, 00:10:14.609 { 00:10:14.609 "params": { 00:10:14.609 "lvs_name": "dd_lvstore", 00:10:14.609 "lvol_name": "dd_lvol", 00:10:14.610 "size_in_mib": 36, 00:10:14.610 "thin_provision": true 00:10:14.610 }, 00:10:14.610 "method": "bdev_lvol_create" 00:10:14.610 }, 00:10:14.610 { 00:10:14.610 "method": "bdev_wait_for_examine" 00:10:14.610 } 00:10:14.610 ] 00:10:14.610 } 00:10:14.610 ] 00:10:14.610 } 00:10:14.610 [2024-09-28 08:47:52.350141] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:14.610 [2024-09-28 08:47:52.350294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63828 ] 00:10:14.610 [2024-09-28 08:47:52.513792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.868 [2024-09-28 08:47:52.703874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.125 [2024-09-28 08:47:52.885380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:16.512  Copying: 12/36 [MB] (average 600 MBps) 00:10:16.512 00:10:16.512 00:10:16.512 real 0m1.997s 00:10:16.512 user 0m1.691s 00:10:16.512 sys 0m0.987s 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.512 ************************************ 00:10:16.512 END TEST dd_sparse_file_to_bdev 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:10:16.512 ************************************ 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:16.512 ************************************ 00:10:16.512 START TEST dd_sparse_bdev_to_file 00:10:16.512 ************************************ 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:10:16.512 08:47:54 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:10:16.512 08:47:54 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:16.512 { 00:10:16.512 "subsystems": [ 00:10:16.512 { 00:10:16.512 "subsystem": "bdev", 00:10:16.512 "config": [ 00:10:16.512 { 00:10:16.512 "params": { 00:10:16.512 "block_size": 4096, 00:10:16.512 "filename": "dd_sparse_aio_disk", 00:10:16.512 "name": "dd_aio" 00:10:16.512 }, 00:10:16.512 "method": "bdev_aio_create" 00:10:16.512 }, 00:10:16.512 { 00:10:16.512 "method": "bdev_wait_for_examine" 00:10:16.512 } 00:10:16.512 ] 00:10:16.512 } 00:10:16.512 ] 00:10:16.512 } 00:10:16.512 [2024-09-28 08:47:54.420555] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:16.512 [2024-09-28 08:47:54.420826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63878 ] 00:10:16.782 [2024-09-28 08:47:54.618909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.040 [2024-09-28 08:47:54.845196] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.297 [2024-09-28 08:47:55.040882] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:18.673  Copying: 12/36 [MB] (average 923 MBps) 00:10:18.673 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:10:18.673 00:10:18.673 real 0m2.174s 00:10:18.673 user 0m1.859s 00:10:18.673 sys 0m1.033s 00:10:18.673 08:47:56 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:10:18.673 ************************************ 00:10:18.673 END TEST dd_sparse_bdev_to_file 00:10:18.673 ************************************ 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:10:18.673 00:10:18.673 real 0m6.477s 00:10:18.673 user 0m5.325s 00:10:18.673 sys 0m3.131s 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.673 ************************************ 00:10:18.673 END TEST spdk_dd_sparse 00:10:18.673 08:47:56 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:10:18.673 ************************************ 00:10:18.673 08:47:56 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:18.673 08:47:56 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:18.673 08:47:56 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.673 08:47:56 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:18.673 ************************************ 00:10:18.673 START TEST spdk_dd_negative 00:10:18.673 ************************************ 00:10:18.673 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:10:18.673 * Looking for test storage... 
00:10:18.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:10:18.673 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.933 --rc genhtml_branch_coverage=1 00:10:18.933 --rc genhtml_function_coverage=1 00:10:18.933 --rc genhtml_legend=1 00:10:18.933 --rc geninfo_all_blocks=1 00:10:18.933 --rc geninfo_unexecuted_blocks=1 00:10:18.933 00:10:18.933 ' 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.933 --rc genhtml_branch_coverage=1 00:10:18.933 --rc genhtml_function_coverage=1 00:10:18.933 --rc genhtml_legend=1 00:10:18.933 --rc geninfo_all_blocks=1 00:10:18.933 --rc geninfo_unexecuted_blocks=1 00:10:18.933 00:10:18.933 ' 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.933 --rc genhtml_branch_coverage=1 00:10:18.933 --rc genhtml_function_coverage=1 00:10:18.933 --rc genhtml_legend=1 00:10:18.933 --rc geninfo_all_blocks=1 00:10:18.933 --rc geninfo_unexecuted_blocks=1 00:10:18.933 00:10:18.933 ' 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:18.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.933 --rc genhtml_branch_coverage=1 00:10:18.933 --rc genhtml_function_coverage=1 00:10:18.933 --rc genhtml_legend=1 00:10:18.933 --rc geninfo_all_blocks=1 00:10:18.933 --rc geninfo_unexecuted_blocks=1 00:10:18.933 00:10:18.933 ' 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:18.933 ************************************ 00:10:18.933 START TEST 
dd_invalid_arguments 00:10:18.933 ************************************ 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:18.933 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.934 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:18.934 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:18.934 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:10:18.934 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:10:18.934 00:10:18.934 CPU options: 00:10:18.934 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:10:18.934 (like [0,1,10]) 00:10:18.934 --lcores lcore to CPU mapping list. The list is in the format: 00:10:18.934 [<,lcores[@CPUs]>...] 00:10:18.934 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:10:18.934 Within the group, '-' is used for range separator, 00:10:18.934 ',' is used for single number separator. 00:10:18.934 '( )' can be omitted for single element group, 00:10:18.934 '@' can be omitted if cpus and lcores have the same value 00:10:18.934 --disable-cpumask-locks Disable CPU core lock files. 00:10:18.934 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:10:18.934 pollers in the app support interrupt mode) 00:10:18.934 -p, --main-core main (primary) core for DPDK 00:10:18.934 00:10:18.934 Configuration options: 00:10:18.934 -c, --config, --json JSON config file 00:10:18.934 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:10:18.934 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:10:18.934 --wait-for-rpc wait for RPCs to initialize subsystems 00:10:18.934 --rpcs-allowed comma-separated list of permitted RPCS 00:10:18.934 --json-ignore-init-errors don't exit on invalid config entry 00:10:18.934 00:10:18.934 Memory options: 00:10:18.934 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:10:18.934 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:10:18.934 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:10:18.934 -R, --huge-unlink unlink huge files after initialization 00:10:18.934 -n, --mem-channels number of memory channels used for DPDK 00:10:18.934 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:10:18.934 --msg-mempool-size global message memory pool size in count (default: 262143) 00:10:18.934 --no-huge run without using hugepages 00:10:18.934 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:10:18.934 -i, --shm-id shared memory ID (optional) 00:10:18.934 -g, --single-file-segments force creating just one hugetlbfs file 00:10:18.934 00:10:18.934 PCI options: 00:10:18.934 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:10:18.934 -B, --pci-blocked pci addr to block (can be used more than once) 00:10:18.934 -u, --no-pci disable PCI access 00:10:18.934 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:10:18.934 00:10:18.934 Log options: 00:10:18.934 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:10:18.934 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:10:18.934 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:10:18.934 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:10:18.934 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:10:18.934 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:10:18.934 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:10:18.934 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:10:18.934 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:10:18.934 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:10:18.934 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:10:18.934 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:10:18.934 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:10:18.934 --silence-noticelog disable notice level logging to stderr 00:10:18.934 00:10:18.934 Trace options: 00:10:18.934 --num-trace-entries number of trace entries for each core, must be power of 2, 00:10:18.934 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:10:18.934 [2024-09-28 08:47:56.875913] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:10:18.934 setting 0 to disable trace (default 32768) 00:10:18.934 Tracepoints vary in size and can use more than one trace entry. 00:10:18.934 -e, --tpoint-group [:] 00:10:18.934 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:10:18.934 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:10:18.934 blob, bdev_raid, all). 00:10:18.934 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:10:18.934 a tracepoint group. First tpoint inside a group can be enabled by 00:10:18.934 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:10:18.934 combined (e.g. 
thread,bdev:0x1). All available tpoints can be found 00:10:18.934 in /include/spdk_internal/trace_defs.h 00:10:18.934 00:10:18.934 Other options: 00:10:18.934 -h, --help show this usage 00:10:18.934 -v, --version print SPDK version 00:10:18.934 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:10:18.934 --env-context Opaque context for use of the env implementation 00:10:18.934 00:10:18.934 Application specific: 00:10:18.934 [--------- DD Options ---------] 00:10:18.934 --if Input file. Must specify either --if or --ib. 00:10:18.934 --ib Input bdev. Must specifier either --if or --ib 00:10:18.934 --of Output file. Must specify either --of or --ob. 00:10:18.934 --ob Output bdev. Must specify either --of or --ob. 00:10:18.934 --iflag Input file flags. 00:10:18.934 --oflag Output file flags. 00:10:18.934 --bs I/O unit size (default: 4096) 00:10:18.934 --qd Queue depth (default: 2) 00:10:18.934 --count I/O unit count. The number of I/O units to copy. (default: all) 00:10:18.934 --skip Skip this many I/O units at start of input. (default: 0) 00:10:18.934 --seek Skip this many I/O units at start of output. (default: 0) 00:10:18.934 --aio Force usage of AIO. (by default io_uring is used if available) 00:10:18.934 --sparse Enable hole skipping in input target 00:10:18.934 Available iflag and oflag values: 00:10:18.934 append - append mode 00:10:18.934 direct - use direct I/O for data 00:10:18.934 directory - fail unless a directory 00:10:18.934 dsync - use synchronized I/O for data 00:10:18.934 noatime - do not update access time 00:10:18.934 noctty - do not assign controlling terminal from file 00:10:18.934 nofollow - do not follow symlinks 00:10:18.934 nonblock - use non-blocking I/O 00:10:18.934 sync - use synchronized I/O for data and metadata 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:19.193 00:10:19.193 real 0m0.158s 00:10:19.193 user 0m0.090s 00:10:19.193 sys 0m0.066s 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:10:19.193 ************************************ 00:10:19.193 END TEST dd_invalid_arguments 00:10:19.193 ************************************ 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:19.193 ************************************ 00:10:19.193 START TEST dd_double_input 00:10:19.193 ************************************ 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:19.193 08:47:56 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:10:19.193 [2024-09-28 08:47:57.078752] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
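The negative tests in this suite share one pattern: spdk_dd is invoked with a contradictory or missing option through the harness' NOT wrapper, which succeeds only when the wrapped command fails, and the error printed by spdk_dd (as above, spdk_dd.c:1487 for --if together with --ib) is what ends up in the log. A minimal hedged sketch of that pattern outside the harness; expect_failure is an illustrative stand-in for NOT, and the dump file names follow the dd.dump0/dd.dump1 files touched above:

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    # Succeed only if the wrapped command fails (stand-in for the harness' NOT helper).
    expect_failure() {
        if "$@"; then
            echo "unexpectedly succeeded: $*" >&2
            return 1
        fi
    }

    expect_failure "$SPDK_DD" --ii= --ob=                 # unknown option -> "Invalid arguments"
    expect_failure "$SPDK_DD" --if=dd.dump0 --ib= --ob=   # both --if and --ib -> rejected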
00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:19.193 00:10:19.193 real 0m0.159s 00:10:19.193 user 0m0.093s 00:10:19.193 sys 0m0.065s 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:10:19.193 ************************************ 00:10:19.193 END TEST dd_double_input 00:10:19.193 ************************************ 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:19.193 ************************************ 00:10:19.193 START TEST dd_double_output 00:10:19.193 ************************************ 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.193 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:10:19.452 [2024-09-28 08:47:57.288075] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:19.452 00:10:19.452 real 0m0.164s 00:10:19.452 user 0m0.096s 00:10:19.452 sys 0m0.066s 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.452 ************************************ 00:10:19.452 END TEST dd_double_output 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:10:19.452 ************************************ 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:19.452 ************************************ 00:10:19.452 START TEST dd_no_input 00:10:19.452 ************************************ 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:19.452 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:10:19.711 [2024-09-28 08:47:57.492908] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:19.711 00:10:19.711 real 0m0.155s 00:10:19.711 user 0m0.077s 00:10:19.711 sys 0m0.077s 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:10:19.711 ************************************ 00:10:19.711 END TEST dd_no_input 00:10:19.711 ************************************ 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:19.711 ************************************ 00:10:19.711 START TEST dd_no_output 00:10:19.711 ************************************ 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.711 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.712 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.712 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.712 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.712 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.712 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.712 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:19.712 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:10:19.969 [2024-09-28 08:47:57.715072] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:10:19.969 08:47:57 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:10:19.969 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:19.969 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:19.969 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:19.969 00:10:19.969 real 0m0.173s 00:10:19.969 user 0m0.088s 00:10:19.969 sys 0m0.083s 00:10:19.969 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.969 ************************************ 00:10:19.969 END TEST dd_no_output 00:10:19.969 ************************************ 00:10:19.969 08:47:57 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:10:19.969 08:47:57 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:10:19.969 08:47:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:19.969 08:47:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.969 08:47:57 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:19.969 ************************************ 00:10:19.969 START TEST dd_wrong_blocksize 00:10:19.970 ************************************ 00:10:19.970 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:10:19.970 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:19.970 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:10:19.970 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:19.970 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.970 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.970 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.970 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.970 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.970 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:19.970 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:19.970 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:19.970 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:10:19.970 [2024-09-28 08:47:57.933076] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:10:20.228 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:10:20.228 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:20.228 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:20.228 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:20.228 00:10:20.228 real 0m0.166s 00:10:20.228 user 0m0.088s 00:10:20.228 sys 0m0.077s 00:10:20.228 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.228 08:47:57 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:20.228 ************************************ 00:10:20.228 END TEST dd_wrong_blocksize 00:10:20.228 ************************************ 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:20.228 ************************************ 00:10:20.228 START TEST dd_smaller_blocksize 00:10:20.228 ************************************ 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:20.228 
08:47:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:20.228 08:47:58 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:10:20.228 [2024-09-28 08:47:58.144015] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:20.228 [2024-09-28 08:47:58.144191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64139 ] 00:10:20.486 [2024-09-28 08:47:58.316715] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.744 [2024-09-28 08:47:58.548307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.744 [2024-09-28 08:47:58.733712] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:21.311 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:10:21.569 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:10:21.569 [2024-09-28 08:47:59.498029] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:10:21.569 [2024-09-28 08:47:59.498127] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:22.504 [2024-09-28 08:48:00.218946] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:22.764 08:48:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:10:22.764 08:48:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:22.764 08:48:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:10:22.764 08:48:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:10:22.764 08:48:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:10:22.764 08:48:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:22.764 00:10:22.764 real 0m2.616s 00:10:22.764 user 0m1.782s 00:10:22.764 sys 0m0.717s 00:10:22.764 08:48:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.764 ************************************ 00:10:22.764 END TEST dd_smaller_blocksize 00:10:22.764 ************************************ 00:10:22.764 08:48:00 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:10:22.764 08:48:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:10:22.764 08:48:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:22.764 08:48:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.764 08:48:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:22.764 ************************************ 00:10:22.764 START TEST dd_invalid_count 00:10:22.764 ************************************ 00:10:22.764 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
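Before the dd_invalid_count body resumes below, a note on the pattern the two blocksize cases above rely on: spdk_dd is launched with a deliberately bad --bs (0, then 99999999999999), and the NOT wrapper from common/autotest_common.sh turns the resulting non-zero exit into a pass. A minimal sketch of that pass-on-failure idea follows; it is not the real helper (which also normalizes the status, as the es= lines show), and SPDK_BIN plus the short dump names are placeholders.

  expect_failure() {
    local es=0
    "$@" || es=$?        # run the command under test and capture its exit status
    (( es != 0 ))        # the negative test passes only if the command failed
  }
  expect_failure "$SPDK_BIN/spdk_dd" --if=dd.dump0 --of=dd.dump1 --bs=0
  expect_failure "$SPDK_BIN/spdk_dd" --if=dd.dump0 --of=dd.dump1 --bs=99999999999999
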
00:10:22.765 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:22.765 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:10:22.765 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:22.765 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:22.765 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:22.765 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:22.765 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:22.765 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:22.765 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:22.765 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:22.765 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:22.765 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:10:23.023 [2024-09-28 08:48:00.800103] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:23.023 00:10:23.023 real 0m0.151s 00:10:23.023 user 0m0.074s 00:10:23.023 sys 0m0.076s 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:10:23.023 ************************************ 00:10:23.023 END TEST dd_invalid_count 00:10:23.023 ************************************ 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:23.023 ************************************ 
00:10:23.023 START TEST dd_invalid_oflag 00:10:23.023 ************************************ 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:23.023 08:48:00 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:23.023 [2024-09-28 08:48:01.002164] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:23.281 00:10:23.281 real 0m0.152s 00:10:23.281 user 0m0.086s 00:10:23.281 sys 0m0.065s 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:10:23.281 ************************************ 00:10:23.281 END TEST dd_invalid_oflag 00:10:23.281 ************************************ 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:23.281 ************************************ 00:10:23.281 START TEST dd_invalid_iflag 00:10:23.281 
************************************ 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:23.281 [2024-09-28 08:48:01.213743] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:23.281 00:10:23.281 real 0m0.171s 00:10:23.281 user 0m0.092s 00:10:23.281 sys 0m0.076s 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:23.281 08:48:01 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:10:23.281 ************************************ 00:10:23.281 END TEST dd_invalid_iflag 00:10:23.281 ************************************ 00:10:23.538 08:48:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:10:23.538 08:48:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:23.538 08:48:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:23.538 08:48:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:23.538 ************************************ 00:10:23.538 START TEST dd_unknown_flag 00:10:23.538 ************************************ 00:10:23.538 
08:48:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:10:23.538 08:48:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:23.538 08:48:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:10:23.538 08:48:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:23.538 08:48:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.538 08:48:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.538 08:48:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.538 08:48:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.538 08:48:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.538 08:48:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.539 08:48:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:23.539 08:48:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:23.539 08:48:01 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:23.539 [2024-09-28 08:48:01.425952] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
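The flag checks in this stretch of the log all fail during argument parsing: --oflag is only legal together with --of, --iflag only with --if, and the dd_unknown_flag case that continues below passes a value (-1) that is not a recognized file flag at all. A sketch of the three rejected invocations, with SPDK_BIN standing in for /home/vagrant/spdk_repo/spdk/build/bin and the quoted errors taken from the surrounding output:

  "$SPDK_BIN/spdk_dd" --ib= --ob= --oflag=0                    # rejected: --oflags may be used only with --of
  "$SPDK_BIN/spdk_dd" --ib= --ob= --iflag=0                    # rejected: --iflags may be used only with --if
  "$SPDK_BIN/spdk_dd" --if=dd.dump0 --of=dd.dump1 --oflag=-1   # rejected: Unknown file flag: -1
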
00:10:23.539 [2024-09-28 08:48:01.426092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64257 ] 00:10:23.796 [2024-09-28 08:48:01.591479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.052 [2024-09-28 08:48:01.822799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.052 [2024-09-28 08:48:01.995560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:24.310 [2024-09-28 08:48:02.080903] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:24.310 [2024-09-28 08:48:02.080976] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:24.310 [2024-09-28 08:48:02.081049] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:24.310 [2024-09-28 08:48:02.081072] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:24.310 [2024-09-28 08:48:02.081435] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:10:24.310 [2024-09-28 08:48:02.081466] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:24.310 [2024-09-28 08:48:02.081527] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:24.310 [2024-09-28 08:48:02.081545] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:24.876 [2024-09-28 08:48:02.701718] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:25.136 ************************************ 00:10:25.136 END TEST dd_unknown_flag 00:10:25.136 ************************************ 00:10:25.136 08:48:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:10:25.136 08:48:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:25.136 08:48:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:10:25.136 08:48:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:10:25.136 08:48:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:10:25.136 08:48:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:25.136 00:10:25.136 real 0m1.776s 00:10:25.136 user 0m1.461s 00:10:25.136 sys 0m0.209s 00:10:25.136 08:48:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.136 08:48:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:25.395 ************************************ 00:10:25.395 START TEST dd_invalid_json 00:10:25.395 ************************************ 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:25.395 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:25.395 [2024-09-28 08:48:03.266111] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
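The dd_unknown_flag teardown above also shows how the wrapper folds a signal-style exit status back into a simple pass/fail: 234 becomes 106 (minus 128) and then 1, just as 244 became 116 earlier in dd_smaller_blocksize. A rough reconstruction, consistent only with the branches the xtrace happens to show (the real common/autotest_common.sh logic is richer and also honors an explicitly expected status, the [[ -n '' ]] check):

  check_negative_es() {
    local es=$1
    if (( es > 128 )); then    # signal-style statuses: 234 -> 106, 244 -> 116
      es=$(( es - 128 ))
      es=1                     # the case statement visible in the trace then collapses this to 1
    fi
    (( es != 0 ))              # overall pass only when the wrapped command failed
  }
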
00:10:25.395 [2024-09-28 08:48:03.266280] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64298 ] 00:10:25.654 [2024-09-28 08:48:03.430264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.654 [2024-09-28 08:48:03.587834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.654 [2024-09-28 08:48:03.587952] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:10:25.654 [2024-09-28 08:48:03.587975] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:25.654 [2024-09-28 08:48:03.587989] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:25.654 [2024-09-28 08:48:03.588057] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:26.221 ************************************ 00:10:26.221 END TEST dd_invalid_json 00:10:26.221 ************************************ 00:10:26.221 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:10:26.221 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:26.221 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:10:26.221 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:10:26.221 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:10:26.221 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:26.221 00:10:26.221 real 0m0.825s 00:10:26.221 user 0m0.584s 00:10:26.221 sys 0m0.137s 00:10:26.221 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.221 08:48:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:10:26.221 08:48:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:10:26.221 08:48:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:26.221 08:48:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.221 08:48:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:26.221 ************************************ 00:10:26.221 START TEST dd_invalid_seek 00:10:26.221 ************************************ 00:10:26.221 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:10:26.221 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:26.221 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:26.221 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:26.222 
08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:26.222 08:48:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:10:26.222 { 00:10:26.222 "subsystems": [ 00:10:26.222 { 00:10:26.222 "subsystem": "bdev", 00:10:26.222 "config": [ 00:10:26.222 { 00:10:26.222 "params": { 00:10:26.222 "block_size": 512, 00:10:26.222 "num_blocks": 512, 00:10:26.222 "name": "malloc0" 00:10:26.222 }, 00:10:26.222 "method": "bdev_malloc_create" 00:10:26.222 }, 00:10:26.222 { 00:10:26.222 "params": { 00:10:26.222 "block_size": 512, 00:10:26.222 "num_blocks": 512, 00:10:26.222 "name": "malloc1" 00:10:26.222 }, 00:10:26.222 "method": "bdev_malloc_create" 00:10:26.222 }, 00:10:26.222 { 00:10:26.222 "method": "bdev_wait_for_examine" 00:10:26.222 } 00:10:26.222 ] 00:10:26.222 } 00:10:26.222 ] 00:10:26.222 } 00:10:26.222 [2024-09-28 08:48:04.170250] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
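The JSON printed above is what gen_conf hands to spdk_dd on /dev/fd/62: two 512-block, 512-byte malloc bdevs plus bdev_wait_for_examine. One way to reproduce the same dd_invalid_seek setup outside the harness is sketched below; SPDK_BIN is a placeholder, and the command is expected to fail because --seek=513 points past the 512-block malloc1 target.

  conf='{"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"block_size":512,"num_blocks":512,"name":"malloc0"},"method":"bdev_malloc_create"},
    {"params":{"block_size":512,"num_blocks":512,"name":"malloc1"},"method":"bdev_malloc_create"},
    {"method":"bdev_wait_for_examine"}]}]}'
  "$SPDK_BIN/spdk_dd" --ib=malloc0 --ob=malloc1 --seek=513 --bs=512 --json <(printf '%s' "$conf")
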
00:10:26.222 [2024-09-28 08:48:04.170413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64334 ] 00:10:26.480 [2024-09-28 08:48:04.339607] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.739 [2024-09-28 08:48:04.503235] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.739 [2024-09-28 08:48:04.647205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:26.998 [2024-09-28 08:48:04.762181] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:10:26.998 [2024-09-28 08:48:04.762277] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:27.567 [2024-09-28 08:48:05.363352] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:27.826 00:10:27.826 real 0m1.734s 00:10:27.826 user 0m1.480s 00:10:27.826 sys 0m0.237s 00:10:27.826 ************************************ 00:10:27.826 END TEST dd_invalid_seek 00:10:27.826 ************************************ 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:27.826 ************************************ 00:10:27.826 START TEST dd_invalid_skip 00:10:27.826 ************************************ 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:27.826 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:27.827 08:48:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:28.084 { 00:10:28.084 "subsystems": [ 00:10:28.084 { 00:10:28.084 "subsystem": "bdev", 00:10:28.084 "config": [ 00:10:28.084 { 00:10:28.084 "params": { 00:10:28.084 "block_size": 512, 00:10:28.084 "num_blocks": 512, 00:10:28.084 "name": "malloc0" 00:10:28.084 }, 00:10:28.084 "method": "bdev_malloc_create" 00:10:28.084 }, 00:10:28.084 { 00:10:28.084 "params": { 00:10:28.084 "block_size": 512, 00:10:28.084 "num_blocks": 512, 00:10:28.084 "name": "malloc1" 00:10:28.084 }, 00:10:28.084 "method": "bdev_malloc_create" 00:10:28.084 }, 00:10:28.084 { 00:10:28.084 "method": "bdev_wait_for_examine" 00:10:28.084 } 00:10:28.084 ] 00:10:28.084 } 00:10:28.084 ] 00:10:28.084 } 00:10:28.084 [2024-09-28 08:48:05.924438] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
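The arithmetic behind the --seek failure above (and the dd_invalid_skip case starting here) is straightforward: each malloc bdev is created with num_blocks=512 and block_size=512, so valid block offsets run from 0 to 511 and offset 513 cannot exist. The same 512-block ceiling is what the later --count cases trip over.

  num_blocks=512
  block_size=512
  echo "capacity: $(( num_blocks * block_size )) bytes; highest valid block offset: $(( num_blocks - 1 ))"
  offset=513
  (( offset < num_blocks )) || echo "--seek/--skip=$offset exceeds the ${num_blocks}-block device"
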
00:10:28.084 [2024-09-28 08:48:05.924600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64379 ] 00:10:28.341 [2024-09-28 08:48:06.093376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.341 [2024-09-28 08:48:06.279944] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.599 [2024-09-28 08:48:06.460055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:28.599 [2024-09-28 08:48:06.585271] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:10:28.599 [2024-09-28 08:48:06.585345] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:29.575 [2024-09-28 08:48:07.302923] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:29.834 00:10:29.834 real 0m1.896s 00:10:29.834 user 0m1.599s 00:10:29.834 sys 0m0.242s 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.834 ************************************ 00:10:29.834 END TEST dd_invalid_skip 00:10:29.834 ************************************ 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:29.834 ************************************ 00:10:29.834 START TEST dd_invalid_input_count 00:10:29.834 ************************************ 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:29.834 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:29.835 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:29.835 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:29.835 08:48:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:29.835 { 00:10:29.835 "subsystems": [ 00:10:29.835 { 00:10:29.835 "subsystem": "bdev", 00:10:29.835 "config": [ 00:10:29.835 { 00:10:29.835 "params": { 00:10:29.835 "block_size": 512, 00:10:29.835 "num_blocks": 512, 00:10:29.835 "name": "malloc0" 00:10:29.835 }, 00:10:29.835 "method": "bdev_malloc_create" 00:10:29.835 }, 00:10:29.835 { 00:10:29.835 "params": { 00:10:29.835 "block_size": 512, 00:10:29.835 "num_blocks": 512, 00:10:29.835 "name": "malloc1" 00:10:29.835 }, 00:10:29.835 "method": "bdev_malloc_create" 00:10:29.835 }, 00:10:29.835 { 00:10:29.835 "method": "bdev_wait_for_examine" 00:10:29.835 } 00:10:29.835 ] 00:10:29.835 } 00:10:29.835 ] 00:10:29.835 } 00:10:30.093 [2024-09-28 08:48:07.887753] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:30.093 [2024-09-28 08:48:07.888238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64431 ] 00:10:30.093 [2024-09-28 08:48:08.061055] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.352 [2024-09-28 08:48:08.269440] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.611 [2024-09-28 08:48:08.447660] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:30.611 [2024-09-28 08:48:08.572416] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:10:30.611 [2024-09-28 08:48:08.572490] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:31.546 [2024-09-28 08:48:09.286637] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:31.804 00:10:31.804 real 0m1.958s 00:10:31.804 user 0m1.671s 00:10:31.804 sys 0m0.230s 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:10:31.804 ************************************ 00:10:31.804 END TEST dd_invalid_input_count 00:10:31.804 ************************************ 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:31.804 ************************************ 00:10:31.804 START TEST dd_invalid_output_count 00:10:31.804 ************************************ 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # invalid_output_count 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:31.804 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.805 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:10:31.805 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:10:31.805 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:10:31.805 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:31.805 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.805 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:31.805 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.805 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:31.805 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:31.805 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:31.805 08:48:09 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:32.063 { 00:10:32.063 "subsystems": [ 00:10:32.063 { 00:10:32.063 "subsystem": "bdev", 00:10:32.063 "config": [ 00:10:32.063 { 00:10:32.063 "params": { 00:10:32.063 "block_size": 512, 00:10:32.063 "num_blocks": 512, 00:10:32.063 "name": "malloc0" 00:10:32.063 }, 00:10:32.063 "method": "bdev_malloc_create" 00:10:32.063 }, 00:10:32.063 { 00:10:32.063 "method": "bdev_wait_for_examine" 00:10:32.063 } 00:10:32.063 ] 00:10:32.063 } 00:10:32.063 ] 00:10:32.063 } 00:10:32.063 [2024-09-28 08:48:09.889705] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
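Every case in this log is wrapped in the same banner-and-timing scaffolding: a row of asterisks, START TEST, the body, real/user/sys timings, then END TEST. An illustrative stand-in for that run_test helper is sketched below; the actual implementation lives in common/autotest_common.sh and additionally manages xtrace and the '[' 2 -le 1 ']' argument check, so treat this only as a reading aid for the output around it.

  run_test_sketch() {
    local name=$1; shift
    printf '%s\n' '************************************' "START TEST $name" '************************************'
    time "$@"    # the real/user/sys lines in the log come from timing the test body
    printf '%s\n' '************************************' "END TEST $name" '************************************'
  }
  # e.g. run_test_sketch dd_bs_not_multiple bs_not_multiple
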
00:10:32.063 [2024-09-28 08:48:09.889935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64476 ] 00:10:32.322 [2024-09-28 08:48:10.073147] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.322 [2024-09-28 08:48:10.256920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.581 [2024-09-28 08:48:10.434377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:32.581 [2024-09-28 08:48:10.550182] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:10:32.581 [2024-09-28 08:48:10.550486] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:33.516 [2024-09-28 08:48:11.257967] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:33.775 08:48:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:33.776 ************************************ 00:10:33.776 END TEST dd_invalid_output_count 00:10:33.776 ************************************ 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:33.776 00:10:33.776 real 0m1.924s 00:10:33.776 user 0m1.625s 00:10:33.776 sys 0m0.239s 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:33.776 ************************************ 00:10:33.776 START TEST dd_bs_not_multiple 00:10:33.776 ************************************ 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:33.776 08:48:11 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:33.776 08:48:11 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:34.035 { 00:10:34.035 "subsystems": [ 00:10:34.035 { 00:10:34.035 "subsystem": "bdev", 00:10:34.035 "config": [ 00:10:34.035 { 00:10:34.035 "params": { 00:10:34.035 "block_size": 512, 00:10:34.035 "num_blocks": 512, 00:10:34.035 "name": "malloc0" 00:10:34.035 }, 00:10:34.035 "method": "bdev_malloc_create" 00:10:34.035 }, 00:10:34.035 { 00:10:34.035 "params": { 00:10:34.035 "block_size": 512, 00:10:34.035 "num_blocks": 512, 00:10:34.035 "name": "malloc1" 00:10:34.035 }, 00:10:34.035 "method": "bdev_malloc_create" 00:10:34.035 }, 00:10:34.035 { 00:10:34.035 "method": "bdev_wait_for_examine" 00:10:34.035 } 00:10:34.035 ] 00:10:34.035 } 00:10:34.035 ] 00:10:34.035 } 00:10:34.035 [2024-09-28 08:48:11.824852] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
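The dd_bs_not_multiple run that the EAL parameters below belong to exercises the last negative case in the suite: --bs must be a whole multiple of the input's native block size, so 513 is rejected with "--bs value must be a multiple of input native block size (512)". The check reduces to a modulus test against the source bdev's 512-byte blocks:

  native_bs=512
  for bs in 512 1024 513; do
    if (( bs % native_bs == 0 )); then
      echo "--bs=$bs accepted (multiple of $native_bs)"
    else
      echo "--bs=$bs rejected"
    fi
  done
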
00:10:34.035 [2024-09-28 08:48:11.825002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64525 ] 00:10:34.035 [2024-09-28 08:48:11.984383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.294 [2024-09-28 08:48:12.168217] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.553 [2024-09-28 08:48:12.345666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:34.553 [2024-09-28 08:48:12.469606] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:10:34.553 [2024-09-28 08:48:12.469684] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:35.491 [2024-09-28 08:48:13.182519] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:35.749 08:48:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:10:35.749 08:48:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:35.749 08:48:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:10:35.749 08:48:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:10:35.750 08:48:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:10:35.750 08:48:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:35.750 ************************************ 00:10:35.750 END TEST dd_bs_not_multiple 00:10:35.750 ************************************ 00:10:35.750 00:10:35.750 real 0m1.875s 00:10:35.750 user 0m1.612s 00:10:35.750 sys 0m0.212s 00:10:35.750 08:48:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.750 08:48:13 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:10:35.750 ************************************ 00:10:35.750 END TEST spdk_dd_negative 00:10:35.750 ************************************ 00:10:35.750 00:10:35.750 real 0m17.048s 00:10:35.750 user 0m12.961s 00:10:35.750 sys 0m3.465s 00:10:35.750 08:48:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.750 08:48:13 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:35.750 ************************************ 00:10:35.750 END TEST spdk_dd 00:10:35.750 ************************************ 00:10:35.750 00:10:35.750 real 3m2.365s 00:10:35.750 user 2m28.761s 00:10:35.750 sys 1m2.360s 00:10:35.750 08:48:13 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.750 08:48:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:35.750 08:48:13 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:35.750 08:48:13 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:10:35.750 08:48:13 -- spdk/autotest.sh@256 -- # timing_exit lib 00:10:35.750 08:48:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:35.750 08:48:13 -- common/autotest_common.sh@10 -- # set +x 00:10:35.750 08:48:13 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:10:35.750 08:48:13 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:10:35.750 08:48:13 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:10:35.750 08:48:13 -- spdk/autotest.sh@273 -- 
# export NET_TYPE 00:10:35.750 08:48:13 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:10:36.008 08:48:13 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:10:36.008 08:48:13 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:36.008 08:48:13 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.008 08:48:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.008 08:48:13 -- common/autotest_common.sh@10 -- # set +x 00:10:36.008 ************************************ 00:10:36.008 START TEST nvmf_tcp 00:10:36.008 ************************************ 00:10:36.008 08:48:13 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:36.008 * Looking for test storage... 00:10:36.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:36.008 08:48:13 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:36.008 08:48:13 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:10:36.008 08:48:13 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:36.008 08:48:13 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:36.008 08:48:13 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.009 08:48:13 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:10:36.009 08:48:13 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.009 08:48:13 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:36.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.009 --rc genhtml_branch_coverage=1 00:10:36.009 --rc genhtml_function_coverage=1 00:10:36.009 --rc genhtml_legend=1 00:10:36.009 --rc geninfo_all_blocks=1 00:10:36.009 --rc geninfo_unexecuted_blocks=1 00:10:36.009 00:10:36.009 ' 00:10:36.009 08:48:13 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:36.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.009 --rc genhtml_branch_coverage=1 00:10:36.009 --rc genhtml_function_coverage=1 00:10:36.009 --rc genhtml_legend=1 00:10:36.009 --rc geninfo_all_blocks=1 00:10:36.009 --rc geninfo_unexecuted_blocks=1 00:10:36.009 00:10:36.009 ' 00:10:36.009 08:48:13 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:36.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.009 --rc genhtml_branch_coverage=1 00:10:36.009 --rc genhtml_function_coverage=1 00:10:36.009 --rc genhtml_legend=1 00:10:36.009 --rc geninfo_all_blocks=1 00:10:36.009 --rc geninfo_unexecuted_blocks=1 00:10:36.009 00:10:36.009 ' 00:10:36.009 08:48:13 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:36.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.009 --rc genhtml_branch_coverage=1 00:10:36.009 --rc genhtml_function_coverage=1 00:10:36.009 --rc genhtml_legend=1 00:10:36.009 --rc geninfo_all_blocks=1 00:10:36.009 --rc geninfo_unexecuted_blocks=1 00:10:36.009 00:10:36.009 ' 00:10:36.009 08:48:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:36.009 08:48:13 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:36.009 08:48:13 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:36.009 08:48:13 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.009 08:48:13 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.009 08:48:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.009 ************************************ 00:10:36.009 START TEST nvmf_target_core 00:10:36.009 ************************************ 00:10:36.009 08:48:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:36.268 * Looking for test storage... 00:10:36.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:10:36.268 08:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:36.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.269 --rc genhtml_branch_coverage=1 00:10:36.269 --rc genhtml_function_coverage=1 00:10:36.269 --rc genhtml_legend=1 00:10:36.269 --rc geninfo_all_blocks=1 00:10:36.269 --rc geninfo_unexecuted_blocks=1 00:10:36.269 00:10:36.269 ' 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:36.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.269 --rc genhtml_branch_coverage=1 00:10:36.269 --rc genhtml_function_coverage=1 00:10:36.269 --rc genhtml_legend=1 00:10:36.269 --rc geninfo_all_blocks=1 00:10:36.269 --rc geninfo_unexecuted_blocks=1 00:10:36.269 00:10:36.269 ' 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:36.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.269 --rc genhtml_branch_coverage=1 00:10:36.269 --rc genhtml_function_coverage=1 00:10:36.269 --rc genhtml_legend=1 00:10:36.269 --rc geninfo_all_blocks=1 00:10:36.269 --rc geninfo_unexecuted_blocks=1 00:10:36.269 00:10:36.269 ' 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:36.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.269 --rc genhtml_branch_coverage=1 00:10:36.269 --rc genhtml_function_coverage=1 00:10:36.269 --rc genhtml_legend=1 00:10:36.269 --rc geninfo_all_blocks=1 00:10:36.269 --rc geninfo_unexecuted_blocks=1 00:10:36.269 00:10:36.269 ' 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.269 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.269 ************************************ 00:10:36.269 START TEST nvmf_host_management 00:10:36.269 ************************************ 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:36.269 * Looking for test storage... 
00:10:36.269 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:36.269 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:36.530 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:36.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.531 --rc genhtml_branch_coverage=1 00:10:36.531 --rc genhtml_function_coverage=1 00:10:36.531 --rc genhtml_legend=1 00:10:36.531 --rc geninfo_all_blocks=1 00:10:36.531 --rc geninfo_unexecuted_blocks=1 00:10:36.531 00:10:36.531 ' 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:36.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.531 --rc genhtml_branch_coverage=1 00:10:36.531 --rc genhtml_function_coverage=1 00:10:36.531 --rc genhtml_legend=1 00:10:36.531 --rc geninfo_all_blocks=1 00:10:36.531 --rc geninfo_unexecuted_blocks=1 00:10:36.531 00:10:36.531 ' 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:36.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.531 --rc genhtml_branch_coverage=1 00:10:36.531 --rc genhtml_function_coverage=1 00:10:36.531 --rc genhtml_legend=1 00:10:36.531 --rc geninfo_all_blocks=1 00:10:36.531 --rc geninfo_unexecuted_blocks=1 00:10:36.531 00:10:36.531 ' 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:36.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:36.531 --rc genhtml_branch_coverage=1 00:10:36.531 --rc genhtml_function_coverage=1 00:10:36.531 --rc genhtml_legend=1 00:10:36.531 --rc geninfo_all_blocks=1 00:10:36.531 --rc geninfo_unexecuted_blocks=1 00:10:36.531 00:10:36.531 ' 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.531 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:36.532 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:36.532 08:48:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:36.532 Cannot find device "nvmf_init_br" 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:36.532 Cannot find device "nvmf_init_br2" 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:36.532 Cannot find device "nvmf_tgt_br" 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:36.532 Cannot find device "nvmf_tgt_br2" 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:36.532 Cannot find device "nvmf_init_br" 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:36.532 Cannot find device "nvmf_init_br2" 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:36.532 Cannot find device "nvmf_tgt_br" 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:36.532 Cannot find device "nvmf_tgt_br2" 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:36.532 Cannot find device "nvmf_br" 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:36.532 Cannot find device "nvmf_init_if" 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:36.532 Cannot find device "nvmf_init_if2" 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:36.532 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.532 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:10:36.533 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:36.533 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:36.533 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:10:36.533 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:36.533 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:36.792 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:37.050 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:37.050 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:37.050 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:37.050 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:37.050 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:37.050 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:37.050 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:10:37.050 00:10:37.050 --- 10.0.0.3 ping statistics --- 00:10:37.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.050 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:10:37.050 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:37.050 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:37.050 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:10:37.050 00:10:37.050 --- 10.0.0.4 ping statistics --- 00:10:37.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.050 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:37.050 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:37.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:10:37.050 00:10:37.050 --- 10.0.0.1 ping statistics --- 00:10:37.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.051 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:37.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:37.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:10:37.051 00:10:37.051 --- 10.0.0.2 ping statistics --- 00:10:37.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.051 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=64883 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 64883 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64883 ']' 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.051 08:48:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:37.051 [2024-09-28 08:48:14.980035] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:10:37.051 [2024-09-28 08:48:14.980221] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.308 [2024-09-28 08:48:15.155418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.566 [2024-09-28 08:48:15.349118] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.566 [2024-09-28 08:48:15.349181] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.566 [2024-09-28 08:48:15.349202] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.566 [2024-09-28 08:48:15.349216] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.566 [2024-09-28 08:48:15.349230] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.566 [2024-09-28 08:48:15.349417] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.566 [2024-09-28 08:48:15.350031] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.566 [2024-09-28 08:48:15.350136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.566 [2024-09-28 08:48:15.350147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:10:37.566 [2024-09-28 08:48:15.532546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:38.132 08:48:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.132 08:48:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:38.132 08:48:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:38.132 08:48:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:38.132 08:48:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:38.132 08:48:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.132 08:48:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:38.132 08:48:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.132 08:48:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:38.132 [2024-09-28 08:48:15.999637] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.132 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.132 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:38.132 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:38.132 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:38.132 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
00:10:38.132 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:38.132 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:38.132 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.132 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:38.132 Malloc0 00:10:38.132 [2024-09-28 08:48:16.103748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:38.132 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.132 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:38.132 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:38.132 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64943 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64943 /var/tmp/bdevperf.sock 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 64943 ']' 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:38.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:38.391 { 00:10:38.391 "params": { 00:10:38.391 "name": "Nvme$subsystem", 00:10:38.391 "trtype": "$TEST_TRANSPORT", 00:10:38.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:38.391 "adrfam": "ipv4", 00:10:38.391 "trsvcid": "$NVMF_PORT", 00:10:38.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:38.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:38.391 "hdgst": ${hdgst:-false}, 00:10:38.391 "ddgst": ${ddgst:-false} 00:10:38.391 }, 00:10:38.391 "method": "bdev_nvme_attach_controller" 00:10:38.391 } 00:10:38.391 EOF 00:10:38.391 )") 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:10:38.391 08:48:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:38.391 "params": { 00:10:38.391 "name": "Nvme0", 00:10:38.391 "trtype": "tcp", 00:10:38.391 "traddr": "10.0.0.3", 00:10:38.391 "adrfam": "ipv4", 00:10:38.391 "trsvcid": "4420", 00:10:38.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:38.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:38.391 "hdgst": false, 00:10:38.391 "ddgst": false 00:10:38.391 }, 00:10:38.391 "method": "bdev_nvme_attach_controller" 00:10:38.391 }' 00:10:38.391 [2024-09-28 08:48:16.241840] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:38.391 [2024-09-28 08:48:16.242034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64943 ] 00:10:38.650 [2024-09-28 08:48:16.396848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.650 [2024-09-28 08:48:16.569798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.909 [2024-09-28 08:48:16.749275] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.168 Running I/O for 10 seconds... 
00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.429 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:39.429 [2024-09-28 
08:48:17.342670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.342990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.429 
[2024-09-28 08:48:17.343088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.429 [2024-09-28 08:48:17.343129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.429 [2024-09-28 08:48:17.343141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.429 [2024-09-28 08:48:17.343155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.429 [2024-09-28 08:48:17.343182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.429 [2024-09-28 08:48:17.343185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:39.429 [2024-09-28 08:48:17.343212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.429 [2024-09-28 08:48:17.343226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the
state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.429 [2024-09-28 08:48:17.343309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the 
state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:10:39.430 [2024-09-28 08:48:17.343758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.343783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.343849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.343867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.343884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.343898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.343913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.343926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.343941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.343954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.343969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.343982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.343997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.430 [2024-09-28 08:48:17.344468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.430 [2024-09-28 08:48:17.344480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.344977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.344990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.431 [2024-09-28 08:48:17.345443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.431 [2024-09-28 08:48:17.345472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.431 [2024-09-28 08:48:17.345485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.432 [2024-09-28 08:48:17.345500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.432 [2024-09-28 08:48:17.345515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.432 [2024-09-28 08:48:17.345531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.432 [2024-09-28 08:48:17.345544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.432 [2024-09-28 08:48:17.345559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.432 [2024-09-28 08:48:17.345572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.432 [2024-09-28 08:48:17.345587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.432 [2024-09-28 08:48:17.345600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.432 [2024-09-28 08:48:17.345617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.432 [2024-09-28 08:48:17.345630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.432 [2024-09-28 08:48:17.345646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.432 [2024-09-28 08:48:17.345658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.432 [2024-09-28 08:48:17.345673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.432 [2024-09-28 08:48:17.345686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.432 [2024-09-28 08:48:17.345702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:39.432 [2024-09-28 08:48:17.345715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:39.432 [2024-09-28 08:48:17.345728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:10:39.432 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:39.432 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.432 [2024-09-28 08:48:17.346007] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 
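The wall of tqpair state transitions and ABORTED - SQ DELETION completions above is the point of this test step: host_management.sh@84 removed the host NQN from the subsystem while bdevperf still had 64 READs queued, and @85 re-adds it so the initiator's automatic reset can reconnect. Reduced to the two RPCs actually traced (rpc_cmd is the test suite's wrapper around scripts/rpc.py; error checks omitted):

# Drop the host while I/O is running: the target tears down the TCP qpair and
# every outstanding command completes as ABORTED - SQ DELETION on the host side.
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Allow the host again: bdev_nvme's reset path can now reconnect, which shows
# up below as "Resetting controller successful".
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0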
00:10:39.432 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:39.432 [2024-09-28 08:48:17.347372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:39.432 task offset: 65536 on job bdev=Nvme0n1 fails 00:10:39.432 00:10:39.432 Latency(us) 00:10:39.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.432 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:39.432 Job: Nvme0n1 ended in about 0.41 seconds with error 00:10:39.432 Verification LBA range: start 0x0 length 0x400 00:10:39.432 Nvme0n1 : 0.41 1242.58 77.66 155.32 0.00 44338.86 4259.84 40989.79 00:10:39.432 =================================================================================================================== 00:10:39.432 Total : 1242.58 77.66 155.32 0.00 44338.86 4259.84 40989.79 00:10:39.432 [2024-09-28 08:48:17.352610] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:39.432 [2024-09-28 08:48:17.352670] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:10:39.432 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.432 08:48:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:39.432 [2024-09-28 08:48:17.357599] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:40.366 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64943 00:10:40.366 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64943) - No such process 00:10:40.366 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:40.366 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:40.625 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:40.625 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:40.625 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:10:40.625 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:10:40.625 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:40.625 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:40.625 { 00:10:40.625 "params": { 00:10:40.625 "name": "Nvme$subsystem", 00:10:40.625 "trtype": "$TEST_TRANSPORT", 00:10:40.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:40.625 "adrfam": "ipv4", 00:10:40.625 "trsvcid": "$NVMF_PORT", 00:10:40.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:40.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:40.625 "hdgst": ${hdgst:-false}, 00:10:40.625 "ddgst": ${ddgst:-false} 00:10:40.625 }, 00:10:40.625 "method": "bdev_nvme_attach_controller" 00:10:40.625 } 00:10:40.625 EOF 00:10:40.625 )") 00:10:40.625 08:48:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:10:40.625 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:10:40.625 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:10:40.625 08:48:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:40.625 "params": { 00:10:40.625 "name": "Nvme0", 00:10:40.625 "trtype": "tcp", 00:10:40.625 "traddr": "10.0.0.3", 00:10:40.625 "adrfam": "ipv4", 00:10:40.625 "trsvcid": "4420", 00:10:40.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:40.625 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:40.625 "hdgst": false, 00:10:40.625 "ddgst": false 00:10:40.625 }, 00:10:40.625 "method": "bdev_nvme_attach_controller" 00:10:40.625 }' 00:10:40.625 [2024-09-28 08:48:18.476523] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:40.625 [2024-09-28 08:48:18.477350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64982 ] 00:10:40.883 [2024-09-28 08:48:18.669004] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.883 [2024-09-28 08:48:18.842694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.140 [2024-09-28 08:48:19.025542] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:41.425 Running I/O for 1 seconds... 00:10:42.369 1344.00 IOPS, 84.00 MiB/s 00:10:42.369 Latency(us) 00:10:42.369 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.369 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:42.369 Verification LBA range: start 0x0 length 0x400 00:10:42.369 Nvme0n1 : 1.01 1398.18 87.39 0.00 0.00 44931.39 7685.59 39559.91 00:10:42.369 =================================================================================================================== 00:10:42.369 Total : 1398.18 87.39 0.00 0.00 44931.39 7685.59 39559.91 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.744 rmmod nvme_tcp 00:10:43.744 rmmod nvme_fabrics 00:10:43.744 rmmod nvme_keyring 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 64883 ']' 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 64883 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 64883 ']' 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 64883 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64883 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:43.744 killing process with pid 64883 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64883' 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 64883 00:10:43.744 08:48:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 64883 00:10:44.678 [2024-09-28 08:48:22.614433] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set 
nvmf_tgt_br nomaster 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.936 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.195 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:10:45.195 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:45.195 00:10:45.195 real 0m8.789s 00:10:45.195 user 0m32.853s 00:10:45.195 sys 0m1.748s 00:10:45.195 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.195 08:48:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:45.195 ************************************ 00:10:45.195 END TEST nvmf_host_management 00:10:45.195 ************************************ 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.195 ************************************ 00:10:45.195 START TEST nvmf_lvol 00:10:45.195 ************************************ 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:45.195 * Looking for test storage... 
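Before the lvol output continues, a note on the teardown traced just above: nvmftestfini unloads nvme-tcp, nvme-fabrics and nvme-keyring, restores iptables, tears down the veth/bridge/netns topology, and stops the target through killprocess. The killprocess steps visible in the trace (liveness check with kill -0, comm lookup via ps, refusal to signal sudo, then kill and wait) reduce to roughly this Linux-only sketch; the real helper in autotest_common.sh covers more cases:

killprocess() {
    local pid=$1 process_name

    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 1        # bail out if already gone
    process_name=$(ps --no-headers -o comm= "$pid")
    [[ $process_name != sudo ]] || return 1        # never signal the sudo wrapper

    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null || true               # reap it if it is our child
}

The trace shows exactly this path for pid 64883: kill -0 succeeds, the comm resolves to reactor_1, and the target is killed and waited on before the network teardown.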
00:10:45.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.195 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.454 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.454 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.454 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:45.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.455 --rc genhtml_branch_coverage=1 00:10:45.455 --rc genhtml_function_coverage=1 00:10:45.455 --rc genhtml_legend=1 00:10:45.455 --rc geninfo_all_blocks=1 00:10:45.455 --rc geninfo_unexecuted_blocks=1 00:10:45.455 00:10:45.455 ' 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:45.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.455 --rc genhtml_branch_coverage=1 00:10:45.455 --rc genhtml_function_coverage=1 00:10:45.455 --rc genhtml_legend=1 00:10:45.455 --rc geninfo_all_blocks=1 00:10:45.455 --rc geninfo_unexecuted_blocks=1 00:10:45.455 00:10:45.455 ' 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:45.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.455 --rc genhtml_branch_coverage=1 00:10:45.455 --rc genhtml_function_coverage=1 00:10:45.455 --rc genhtml_legend=1 00:10:45.455 --rc geninfo_all_blocks=1 00:10:45.455 --rc geninfo_unexecuted_blocks=1 00:10:45.455 00:10:45.455 ' 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:45.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.455 --rc genhtml_branch_coverage=1 00:10:45.455 --rc genhtml_function_coverage=1 00:10:45.455 --rc genhtml_legend=1 00:10:45.455 --rc geninfo_all_blocks=1 00:10:45.455 --rc geninfo_unexecuted_blocks=1 00:10:45.455 00:10:45.455 ' 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.455 08:48:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:45.455 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:45.455 
08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:45.455 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
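The nvmf_veth_init records here and immediately below pin down the test topology: initiator addresses 10.0.0.1 and 10.0.0.2 stay on the host, target addresses 10.0.0.3 and 10.0.0.4 live inside the nvmf_tgt_ns_spdk namespace, and both sides meet on the nvmf_br bridge. The trace that follows first clears any leftover interfaces and then rebuilds everything; the lines below are a condensed single-pair sketch of the end state, assuming only iproute2, and are not the harness code itself (the harness creates two pairs per side).

# Sketch: one initiator veth pair on the host, one target pair whose far end
# sits in the namespace, both host-side peers enslaved to a common bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

With that in place, traffic from 10.0.0.1 on the host reaches 10.0.0.3 inside the namespace across the bridge, which is exactly what the ping checks further down confirm.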
00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:45.456 Cannot find device "nvmf_init_br" 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:45.456 Cannot find device "nvmf_init_br2" 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:45.456 Cannot find device "nvmf_tgt_br" 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:45.456 Cannot find device "nvmf_tgt_br2" 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:45.456 Cannot find device "nvmf_init_br" 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:45.456 Cannot find device "nvmf_init_br2" 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:45.456 Cannot find device "nvmf_tgt_br" 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:45.456 Cannot find device "nvmf_tgt_br2" 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:45.456 Cannot find device "nvmf_br" 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:45.456 Cannot find device "nvmf_init_if" 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:45.456 Cannot find device "nvmf_init_if2" 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:45.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:45.456 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:45.456 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:45.715 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:45.715 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:10:45.715 00:10:45.715 --- 10.0.0.3 ping statistics --- 00:10:45.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.715 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:45.715 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:45.715 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:10:45.715 00:10:45.715 --- 10.0.0.4 ping statistics --- 00:10:45.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.715 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:45.715 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:45.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:45.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:10:45.715 00:10:45.715 --- 10.0.0.1 ping statistics --- 00:10:45.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.716 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:45.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:45.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:10:45.716 00:10:45.716 --- 10.0.0.2 ping statistics --- 00:10:45.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.716 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=65280 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 65280 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 65280 ']' 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:45.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.716 08:48:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:45.974 [2024-09-28 08:48:23.766195] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
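At this point nvmfappstart has launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7) and waitforlisten is polling its RPC socket before any rpc.py call is issued; the startup banner that follows is that target coming up. Below is a condensed sketch of the launch-and-wait step, assuming the default /var/tmp/spdk.sock path seen in the trace; the rpc_get_methods probe and the 100-attempt limit are illustrative choices, not the harness's exact loop.

# Sketch of launch-and-wait; the probe RPC and retry budget are assumptions.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
for _ in $(seq 1 100); do
    if [ -S /var/tmp/spdk.sock ] && \
       /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done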
00:10:45.975 [2024-09-28 08:48:23.766372] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.975 [2024-09-28 08:48:23.943068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:46.233 [2024-09-28 08:48:24.173480] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.233 [2024-09-28 08:48:24.173555] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.233 [2024-09-28 08:48:24.173580] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.233 [2024-09-28 08:48:24.173592] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.233 [2024-09-28 08:48:24.173604] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.233 [2024-09-28 08:48:24.173872] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.233 [2024-09-28 08:48:24.174145] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.233 [2024-09-28 08:48:24.174162] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.492 [2024-09-28 08:48:24.349682] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:47.059 08:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:47.059 08:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:10:47.059 08:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:47.059 08:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:47.059 08:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:47.059 08:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.059 08:48:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:47.059 [2024-09-28 08:48:25.015107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.059 08:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.626 08:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:47.626 08:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.884 08:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:47.884 08:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:48.143 08:48:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:48.401 08:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0aada114-ea35-4550-8af9-8e59032fdd1a 00:10:48.401 08:48:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0aada114-ea35-4550-8af9-8e59032fdd1a lvol 20 00:10:48.660 08:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3b601f76-2dc8-47ee-842b-ae59b844ac70 00:10:48.660 08:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:48.919 08:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b601f76-2dc8-47ee-842b-ae59b844ac70 00:10:49.178 08:48:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:49.436 [2024-09-28 08:48:27.198592] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:49.436 08:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:49.695 08:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:49.695 08:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65361 00:10:49.695 08:48:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:50.631 08:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 3b601f76-2dc8-47ee-842b-ae59b844ac70 MY_SNAPSHOT 00:10:50.889 08:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=92afcf64-c28a-4ec9-b6e5-e2af170f14a3 00:10:50.889 08:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 3b601f76-2dc8-47ee-842b-ae59b844ac70 30 00:10:51.148 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 92afcf64-c28a-4ec9-b6e5-e2af170f14a3 MY_CLONE 00:10:51.736 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=844841f1-925a-4442-9a43-cbf4b0ad87a9 00:10:51.736 08:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 844841f1-925a-4442-9a43-cbf4b0ad87a9 00:10:52.309 08:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65361 00:11:00.424 Initializing NVMe Controllers 00:11:00.424 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:11:00.424 Controller IO queue size 128, less than required. 00:11:00.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:00.424 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:00.424 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:00.424 Initialization complete. Launching workers. 
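Before the performance summary below, the RPC sequence the preceding records drove is worth collecting in one place: two malloc bdevs striped into a raid0, a logical volume store on top of it, a volume named lvol (size 20) exported through subsystem nqn.2016-06.io.spdk:cnode0 on 10.0.0.3:4420, and then snapshot, resize to 30, clone and inflate issued while spdk_nvme_perf keeps random writes in flight. The lines below are a condensed sketch with the UUIDs replaced by shell variables; the concrete values appear in the trace, and $rpc stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py.

# Condensed sketch of the rpc.py calls traced above, not the script verbatim.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                   # Malloc0
$rpc bdev_malloc_create 64 512                                   # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# while spdk_nvme_perf runs against 10.0.0.3:4420 (-w randwrite -t 10):
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"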
00:11:00.424 ======================================================== 00:11:00.424 Latency(us) 00:11:00.424 Device Information : IOPS MiB/s Average min max 00:11:00.424 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9250.00 36.13 13851.20 504.79 171003.16 00:11:00.424 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9119.50 35.62 14046.64 3922.34 129842.68 00:11:00.424 ======================================================== 00:11:00.424 Total : 18369.50 71.76 13948.23 504.79 171003.16 00:11:00.424 00:11:00.424 08:48:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:00.424 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3b601f76-2dc8-47ee-842b-ae59b844ac70 00:11:00.424 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0aada114-ea35-4550-8af9-8e59032fdd1a 00:11:00.682 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:00.682 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:00.682 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:00.682 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:00.682 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:11:00.682 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.682 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:11:00.682 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.682 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.682 rmmod nvme_tcp 00:11:00.682 rmmod nvme_fabrics 00:11:00.682 rmmod nvme_keyring 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 65280 ']' 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 65280 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 65280 ']' 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 65280 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65280 00:11:00.941 killing process with pid 65280 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 65280' 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 65280 00:11:00.941 08:48:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 65280 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:11:02.318 00:11:02.318 real 0m17.249s 00:11:02.318 user 1m8.530s 00:11:02.318 sys 0m3.907s 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:02.318 ************************************ 00:11:02.318 END TEST nvmf_lvol 00:11:02.318 ************************************ 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.318 08:48:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.578 ************************************ 00:11:02.578 START TEST nvmf_lvs_grow 00:11:02.578 ************************************ 00:11:02.578 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:02.578 * Looking for test storage... 00:11:02.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:02.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.579 --rc genhtml_branch_coverage=1 00:11:02.579 --rc genhtml_function_coverage=1 00:11:02.579 --rc genhtml_legend=1 00:11:02.579 --rc geninfo_all_blocks=1 00:11:02.579 --rc geninfo_unexecuted_blocks=1 00:11:02.579 00:11:02.579 ' 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:02.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.579 --rc genhtml_branch_coverage=1 00:11:02.579 --rc genhtml_function_coverage=1 00:11:02.579 --rc genhtml_legend=1 00:11:02.579 --rc geninfo_all_blocks=1 00:11:02.579 --rc geninfo_unexecuted_blocks=1 00:11:02.579 00:11:02.579 ' 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:02.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.579 --rc genhtml_branch_coverage=1 00:11:02.579 --rc genhtml_function_coverage=1 00:11:02.579 --rc genhtml_legend=1 00:11:02.579 --rc geninfo_all_blocks=1 00:11:02.579 --rc geninfo_unexecuted_blocks=1 00:11:02.579 00:11:02.579 ' 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:02.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.579 --rc genhtml_branch_coverage=1 00:11:02.579 --rc genhtml_function_coverage=1 00:11:02.579 --rc genhtml_legend=1 00:11:02.579 --rc geninfo_all_blocks=1 00:11:02.579 --rc geninfo_unexecuted_blocks=1 00:11:02.579 00:11:02.579 ' 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:02.579 08:48:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.579 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
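The lvs_grow run keeps two RPC endpoints: the nvmf target reached over the default /var/tmp/spdk.sock, and a bdevperf instance (started later in the test) reached over the bdevperf_rpc_sock path defined above. rpc.py selects the endpoint with its -s option; the two calls below only illustrate the pattern, with rpc_get_methods and bdev_get_bdevs standing in for whatever query the test actually issues.

# Default socket: the nvmf target brought up by nvmftestinit/nvmfappstart.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods
# Explicit socket: the bdevperf process, once it is listening on its own socket.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs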
00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:02.579 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:02.580 Cannot find device "nvmf_init_br" 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:02.580 Cannot find device "nvmf_init_br2" 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:02.580 Cannot find device "nvmf_tgt_br" 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:11:02.580 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.839 Cannot find device "nvmf_tgt_br2" 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:02.839 Cannot find device "nvmf_init_br" 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:02.839 Cannot find device "nvmf_init_br2" 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:02.839 Cannot find device "nvmf_tgt_br" 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:02.839 Cannot find device "nvmf_tgt_br2" 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:02.839 Cannot find device "nvmf_br" 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:02.839 Cannot find device "nvmf_init_if" 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:02.839 Cannot find device "nvmf_init_if2" 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:02.839 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:02.840 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:02.840 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:02.840 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:02.840 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:02.840 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:02.840 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:02.840 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
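For reference, a minimal standalone sketch of the veth/netns/bridge fixture the trace builds and then verifies above and below this point; it reuses the same interface names, namespace and addresses, is meant to run as root, and is an illustration of the layout rather than the canonical nvmf/common.sh implementation (the second interface pair is elided):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays in the host ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side, one end goes into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                        # the *_br peers are the bridge ports
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.3                                              # host to target namespace, as in the trace

The nvmf_init_if2/nvmf_tgt_if2 pair (10.0.0.2 and 10.0.0.4) is created the same way, which is what the remaining commands in this part of the log are doing.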
00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:03.100 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:03.100 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:11:03.100 00:11:03.100 --- 10.0.0.3 ping statistics --- 00:11:03.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.100 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:03.100 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:03.100 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:11:03.100 00:11:03.100 --- 10.0.0.4 ping statistics --- 00:11:03.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.100 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:03.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:03.100 00:11:03.100 --- 10.0.0.1 ping statistics --- 00:11:03.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.100 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:03.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:03.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:11:03.100 00:11:03.100 --- 10.0.0.2 ping statistics --- 00:11:03.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.100 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=65757 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 65757 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 65757 ']' 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.100 08:48:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:03.100 [2024-09-28 08:48:41.037939] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:03.100 [2024-09-28 08:48:41.038450] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.359 [2024-09-28 08:48:41.218879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.619 [2024-09-28 08:48:41.446183] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.619 [2024-09-28 08:48:41.446268] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.619 [2024-09-28 08:48:41.446294] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.619 [2024-09-28 08:48:41.446315] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.619 [2024-09-28 08:48:41.446332] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.619 [2024-09-28 08:48:41.446380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.878 [2024-09-28 08:48:41.638742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:04.136 08:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.136 08:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:11:04.136 08:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:04.136 08:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:04.136 08:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:04.136 08:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.136 08:48:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:04.395 [2024-09-28 08:48:42.218868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.395 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:04.395 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:04.395 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:04.395 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:04.395 ************************************ 00:11:04.395 START TEST lvs_grow_clean 00:11:04.395 ************************************ 00:11:04.395 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:11:04.395 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:04.395 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:04.395 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:04.395 08:48:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:04.395 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:04.395 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:04.395 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:04.395 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:04.395 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:04.653 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:04.653 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:04.912 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=005b6e6b-2da3-4971-abad-3f8077ee3028 00:11:04.912 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 005b6e6b-2da3-4971-abad-3f8077ee3028 00:11:04.912 08:48:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:05.170 08:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:05.170 08:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:05.170 08:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 005b6e6b-2da3-4971-abad-3f8077ee3028 lvol 150 00:11:05.452 08:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6ceecfd7-ee7a-428d-acdb-9cca5da3c99b 00:11:05.452 08:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:05.452 08:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:05.737 [2024-09-28 08:48:43.527108] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:05.737 [2024-09-28 08:48:43.527263] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:05.737 true 00:11:05.737 08:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 005b6e6b-2da3-4971-abad-3f8077ee3028 00:11:05.737 08:48:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:05.996 08:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:05.996 08:48:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:06.255 08:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6ceecfd7-ee7a-428d-acdb-9cca5da3c99b 00:11:06.514 08:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:11:06.773 [2024-09-28 08:48:44.575893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:06.773 08:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:07.031 08:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:07.031 08:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65845 00:11:07.032 08:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:07.032 08:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65845 /var/tmp/bdevperf.sock 00:11:07.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:07.032 08:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 65845 ']' 00:11:07.032 08:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:07.032 08:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:07.032 08:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:07.032 08:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:07.032 08:48:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:07.032 [2024-09-28 08:48:44.962424] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:07.032 [2024-09-28 08:48:44.962858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65845 ] 00:11:07.289 [2024-09-28 08:48:45.117352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.289 [2024-09-28 08:48:45.278278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.547 [2024-09-28 08:48:45.431880] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:08.114 08:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.114 08:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:11:08.114 08:48:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:08.373 Nvme0n1 00:11:08.373 08:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:08.633 [ 00:11:08.633 { 00:11:08.633 "name": "Nvme0n1", 00:11:08.633 "aliases": [ 00:11:08.633 "6ceecfd7-ee7a-428d-acdb-9cca5da3c99b" 00:11:08.633 ], 00:11:08.633 "product_name": "NVMe disk", 00:11:08.633 "block_size": 4096, 00:11:08.633 "num_blocks": 38912, 00:11:08.633 "uuid": "6ceecfd7-ee7a-428d-acdb-9cca5da3c99b", 00:11:08.633 "numa_id": -1, 00:11:08.633 "assigned_rate_limits": { 00:11:08.633 "rw_ios_per_sec": 0, 00:11:08.633 "rw_mbytes_per_sec": 0, 00:11:08.633 "r_mbytes_per_sec": 0, 00:11:08.633 "w_mbytes_per_sec": 0 00:11:08.633 }, 00:11:08.633 "claimed": false, 00:11:08.633 "zoned": false, 00:11:08.633 "supported_io_types": { 00:11:08.633 "read": true, 00:11:08.633 "write": true, 00:11:08.633 "unmap": true, 00:11:08.633 "flush": true, 00:11:08.633 "reset": true, 00:11:08.633 "nvme_admin": true, 00:11:08.633 "nvme_io": true, 00:11:08.633 "nvme_io_md": false, 00:11:08.633 "write_zeroes": true, 00:11:08.633 "zcopy": false, 00:11:08.633 "get_zone_info": false, 00:11:08.633 "zone_management": false, 00:11:08.633 "zone_append": false, 00:11:08.633 "compare": true, 00:11:08.633 "compare_and_write": true, 00:11:08.633 "abort": true, 00:11:08.633 "seek_hole": false, 00:11:08.633 "seek_data": false, 00:11:08.633 "copy": true, 00:11:08.633 "nvme_iov_md": false 00:11:08.633 }, 00:11:08.633 "memory_domains": [ 00:11:08.633 { 00:11:08.633 "dma_device_id": "system", 00:11:08.633 "dma_device_type": 1 00:11:08.633 } 00:11:08.633 ], 00:11:08.633 "driver_specific": { 00:11:08.633 "nvme": [ 00:11:08.633 { 00:11:08.633 "trid": { 00:11:08.633 "trtype": "TCP", 00:11:08.633 "adrfam": "IPv4", 00:11:08.633 "traddr": "10.0.0.3", 00:11:08.633 "trsvcid": "4420", 00:11:08.633 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:08.633 }, 00:11:08.633 "ctrlr_data": { 00:11:08.633 "cntlid": 1, 00:11:08.633 "vendor_id": "0x8086", 00:11:08.633 "model_number": "SPDK bdev Controller", 00:11:08.633 "serial_number": "SPDK0", 00:11:08.633 "firmware_revision": "25.01", 00:11:08.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:08.633 "oacs": { 00:11:08.633 "security": 0, 00:11:08.633 "format": 0, 00:11:08.633 "firmware": 0, 
00:11:08.633 "ns_manage": 0 00:11:08.633 }, 00:11:08.633 "multi_ctrlr": true, 00:11:08.633 "ana_reporting": false 00:11:08.633 }, 00:11:08.633 "vs": { 00:11:08.633 "nvme_version": "1.3" 00:11:08.633 }, 00:11:08.633 "ns_data": { 00:11:08.633 "id": 1, 00:11:08.633 "can_share": true 00:11:08.633 } 00:11:08.633 } 00:11:08.633 ], 00:11:08.633 "mp_policy": "active_passive" 00:11:08.633 } 00:11:08.633 } 00:11:08.633 ] 00:11:08.633 08:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65863 00:11:08.633 08:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:08.633 08:48:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:08.633 Running I/O for 10 seconds... 00:11:09.570 Latency(us) 00:11:09.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:09.570 Nvme0n1 : 1.00 5588.00 21.83 0.00 0.00 0.00 0.00 0.00 00:11:09.570 =================================================================================================================== 00:11:09.570 Total : 5588.00 21.83 0.00 0.00 0.00 0.00 0.00 00:11:09.570 00:11:10.507 08:48:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 005b6e6b-2da3-4971-abad-3f8077ee3028 00:11:10.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:10.766 Nvme0n1 : 2.00 5588.00 21.83 0.00 0.00 0.00 0.00 0.00 00:11:10.766 =================================================================================================================== 00:11:10.766 Total : 5588.00 21.83 0.00 0.00 0.00 0.00 0.00 00:11:10.766 00:11:11.025 true 00:11:11.025 08:48:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 005b6e6b-2da3-4971-abad-3f8077ee3028 00:11:11.025 08:48:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:11.284 08:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:11.284 08:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:11.284 08:48:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65863 00:11:11.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:11.852 Nvme0n1 : 3.00 5545.67 21.66 0.00 0.00 0.00 0.00 0.00 00:11:11.852 =================================================================================================================== 00:11:11.852 Total : 5545.67 21.66 0.00 0.00 0.00 0.00 0.00 00:11:11.852 00:11:12.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:12.789 Nvme0n1 : 4.00 5492.75 21.46 0.00 0.00 0.00 0.00 0.00 00:11:12.789 =================================================================================================================== 00:11:12.789 Total : 5492.75 21.46 0.00 0.00 0.00 0.00 0.00 00:11:12.789 00:11:13.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:11:13.725 Nvme0n1 : 5.00 5486.40 21.43 0.00 0.00 0.00 0.00 0.00 00:11:13.725 =================================================================================================================== 00:11:13.725 Total : 5486.40 21.43 0.00 0.00 0.00 0.00 0.00 00:11:13.725 00:11:14.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:14.663 Nvme0n1 : 6.00 5482.17 21.41 0.00 0.00 0.00 0.00 0.00 00:11:14.663 =================================================================================================================== 00:11:14.663 Total : 5482.17 21.41 0.00 0.00 0.00 0.00 0.00 00:11:14.663 00:11:15.630 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:15.630 Nvme0n1 : 7.00 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:11:15.630 =================================================================================================================== 00:11:15.630 Total : 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:11:15.630 00:11:16.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:16.567 Nvme0n1 : 8.00 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:11:16.567 =================================================================================================================== 00:11:16.567 Total : 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:11:16.567 00:11:17.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.945 Nvme0n1 : 9.00 5446.89 21.28 0.00 0.00 0.00 0.00 0.00 00:11:17.945 =================================================================================================================== 00:11:17.945 Total : 5446.89 21.28 0.00 0.00 0.00 0.00 0.00 00:11:17.945 00:11:18.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.880 Nvme0n1 : 10.00 5435.60 21.23 0.00 0.00 0.00 0.00 0.00 00:11:18.880 =================================================================================================================== 00:11:18.880 Total : 5435.60 21.23 0.00 0.00 0.00 0.00 0.00 00:11:18.880 00:11:18.880 00:11:18.880 Latency(us) 00:11:18.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:18.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.880 Nvme0n1 : 10.02 5438.67 21.24 0.00 0.00 23529.66 14596.65 65774.31 00:11:18.880 =================================================================================================================== 00:11:18.880 Total : 5438.67 21.24 0.00 0.00 23529.66 14596.65 65774.31 00:11:18.880 { 00:11:18.880 "results": [ 00:11:18.880 { 00:11:18.881 "job": "Nvme0n1", 00:11:18.881 "core_mask": "0x2", 00:11:18.881 "workload": "randwrite", 00:11:18.881 "status": "finished", 00:11:18.881 "queue_depth": 128, 00:11:18.881 "io_size": 4096, 00:11:18.881 "runtime": 10.017888, 00:11:18.881 "iops": 5438.671304770028, 00:11:18.881 "mibps": 21.24480978425792, 00:11:18.881 "io_failed": 0, 00:11:18.881 "io_timeout": 0, 00:11:18.881 "avg_latency_us": 23529.657361694175, 00:11:18.881 "min_latency_us": 14596.654545454545, 00:11:18.881 "max_latency_us": 65774.31272727273 00:11:18.881 } 00:11:18.881 ], 00:11:18.881 "core_count": 1 00:11:18.881 } 00:11:18.881 08:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65845 00:11:18.881 08:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 65845 ']' 00:11:18.881 08:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@954 -- # kill -0 65845 00:11:18.881 08:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:11:18.881 08:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.881 08:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65845 00:11:18.881 killing process with pid 65845 00:11:18.881 Received shutdown signal, test time was about 10.000000 seconds 00:11:18.881 00:11:18.881 Latency(us) 00:11:18.881 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:18.881 =================================================================================================================== 00:11:18.881 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:18.881 08:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:18.881 08:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:18.881 08:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65845' 00:11:18.881 08:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 65845 00:11:18.881 08:48:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 65845 00:11:19.818 08:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:20.076 08:48:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:20.335 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:20.335 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 005b6e6b-2da3-4971-abad-3f8077ee3028 00:11:20.593 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:20.594 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:20.594 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:20.852 [2024-09-28 08:48:58.649627] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:20.852 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 005b6e6b-2da3-4971-abad-3f8077ee3028 00:11:20.852 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:11:20.852 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 005b6e6b-2da3-4971-abad-3f8077ee3028 00:11:20.852 08:48:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.852 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:20.852 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.852 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:20.852 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.852 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:20.852 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:20.852 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:20.852 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 005b6e6b-2da3-4971-abad-3f8077ee3028 00:11:21.111 request: 00:11:21.111 { 00:11:21.111 "uuid": "005b6e6b-2da3-4971-abad-3f8077ee3028", 00:11:21.111 "method": "bdev_lvol_get_lvstores", 00:11:21.111 "req_id": 1 00:11:21.111 } 00:11:21.111 Got JSON-RPC error response 00:11:21.111 response: 00:11:21.111 { 00:11:21.111 "code": -19, 00:11:21.111 "message": "No such device" 00:11:21.111 } 00:11:21.111 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:11:21.111 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:21.111 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:21.111 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:21.111 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:21.370 aio_bdev 00:11:21.370 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6ceecfd7-ee7a-428d-acdb-9cca5da3c99b 00:11:21.370 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=6ceecfd7-ee7a-428d-acdb-9cca5da3c99b 00:11:21.370 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:21.370 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:11:21.370 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:21.370 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:21.370 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:21.628 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ceecfd7-ee7a-428d-acdb-9cca5da3c99b -t 2000 00:11:21.887 [ 00:11:21.887 { 00:11:21.887 "name": "6ceecfd7-ee7a-428d-acdb-9cca5da3c99b", 00:11:21.887 "aliases": [ 00:11:21.887 "lvs/lvol" 00:11:21.887 ], 00:11:21.887 "product_name": "Logical Volume", 00:11:21.887 "block_size": 4096, 00:11:21.887 "num_blocks": 38912, 00:11:21.887 "uuid": "6ceecfd7-ee7a-428d-acdb-9cca5da3c99b", 00:11:21.887 "assigned_rate_limits": { 00:11:21.887 "rw_ios_per_sec": 0, 00:11:21.887 "rw_mbytes_per_sec": 0, 00:11:21.887 "r_mbytes_per_sec": 0, 00:11:21.887 "w_mbytes_per_sec": 0 00:11:21.887 }, 00:11:21.887 "claimed": false, 00:11:21.887 "zoned": false, 00:11:21.887 "supported_io_types": { 00:11:21.887 "read": true, 00:11:21.887 "write": true, 00:11:21.887 "unmap": true, 00:11:21.887 "flush": false, 00:11:21.887 "reset": true, 00:11:21.887 "nvme_admin": false, 00:11:21.887 "nvme_io": false, 00:11:21.887 "nvme_io_md": false, 00:11:21.887 "write_zeroes": true, 00:11:21.887 "zcopy": false, 00:11:21.887 "get_zone_info": false, 00:11:21.887 "zone_management": false, 00:11:21.887 "zone_append": false, 00:11:21.887 "compare": false, 00:11:21.887 "compare_and_write": false, 00:11:21.887 "abort": false, 00:11:21.887 "seek_hole": true, 00:11:21.887 "seek_data": true, 00:11:21.887 "copy": false, 00:11:21.887 "nvme_iov_md": false 00:11:21.887 }, 00:11:21.887 "driver_specific": { 00:11:21.887 "lvol": { 00:11:21.887 "lvol_store_uuid": "005b6e6b-2da3-4971-abad-3f8077ee3028", 00:11:21.887 "base_bdev": "aio_bdev", 00:11:21.887 "thin_provision": false, 00:11:21.887 "num_allocated_clusters": 38, 00:11:21.887 "snapshot": false, 00:11:21.887 "clone": false, 00:11:21.887 "esnap_clone": false 00:11:21.887 } 00:11:21.887 } 00:11:21.887 } 00:11:21.887 ] 00:11:21.887 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:11:21.887 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 005b6e6b-2da3-4971-abad-3f8077ee3028 00:11:21.887 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:22.146 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:22.146 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 005b6e6b-2da3-4971-abad-3f8077ee3028 00:11:22.146 08:48:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:22.405 08:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:22.405 08:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6ceecfd7-ee7a-428d-acdb-9cca5da3c99b 00:11:22.663 08:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 005b6e6b-2da3-4971-abad-3f8077ee3028 00:11:22.922 08:49:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:23.180 08:49:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:23.439 ************************************ 00:11:23.439 END TEST lvs_grow_clean 00:11:23.439 ************************************ 00:11:23.439 00:11:23.439 real 0m19.067s 00:11:23.439 user 0m18.409s 00:11:23.439 sys 0m2.250s 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:23.439 ************************************ 00:11:23.439 START TEST lvs_grow_dirty 00:11:23.439 ************************************ 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:23.439 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:23.698 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:23.698 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:24.267 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:24.267 
08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:24.267 08:49:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:24.267 08:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:24.267 08:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:24.267 08:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 lvol 150 00:11:24.527 08:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c76b27ec-18ba-44c4-95ca-078a4c4a2dde 00:11:24.527 08:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:24.527 08:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:24.786 [2024-09-28 08:49:02.687810] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:24.786 [2024-09-28 08:49:02.687945] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:24.786 true 00:11:24.786 08:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:24.786 08:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:25.045 08:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:25.045 08:49:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:25.304 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c76b27ec-18ba-44c4-95ca-078a4c4a2dde 00:11:25.563 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:11:25.822 [2024-09-28 08:49:03.752508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:25.822 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:26.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
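At this point the dirty variant has repeated the same preparation as the clean pass: a 200 MiB aio file backs an lvstore with 4 MiB clusters, a 150 MiB lvol is carved out of it, and the file is then truncated to 400 MiB and rescanned without growing the lvstore yet. Condensed into a sketch (paths, sizes and RPC names copied from the trace; $rpc stands in for scripts/rpc.py talking to the already-running target on its default socket):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 clusters on 200 MiB
    $rpc bdev_lvol_create -u "$lvs" lvol 150                                   # 150 MiB volume
    truncate -s 400M "$aio"
    $rpc bdev_aio_rescan aio_bdev        # the aio bdev doubles; the lvstore still reports 49 clusters
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

The grow itself (bdev_lvol_grow_lvstore) is deliberately left for later, while bdevperf I/O is running against the exported lvol.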
00:11:26.081 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66116 00:11:26.081 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:26.081 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:26.081 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66116 /var/tmp/bdevperf.sock 00:11:26.081 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 66116 ']' 00:11:26.081 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:26.081 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.081 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:26.081 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.081 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:26.399 [2024-09-28 08:49:04.165791] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:26.399 [2024-09-28 08:49:04.166263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66116 ] 00:11:26.399 [2024-09-28 08:49:04.336613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.658 [2024-09-28 08:49:04.562871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.918 [2024-09-28 08:49:04.722234] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:27.177 08:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.177 08:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:27.177 08:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:27.436 Nvme0n1 00:11:27.436 08:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:27.695 [ 00:11:27.695 { 00:11:27.695 "name": "Nvme0n1", 00:11:27.695 "aliases": [ 00:11:27.695 "c76b27ec-18ba-44c4-95ca-078a4c4a2dde" 00:11:27.695 ], 00:11:27.695 "product_name": "NVMe disk", 00:11:27.695 "block_size": 4096, 00:11:27.695 "num_blocks": 38912, 00:11:27.695 "uuid": "c76b27ec-18ba-44c4-95ca-078a4c4a2dde", 00:11:27.695 "numa_id": -1, 00:11:27.695 "assigned_rate_limits": { 00:11:27.695 
"rw_ios_per_sec": 0, 00:11:27.695 "rw_mbytes_per_sec": 0, 00:11:27.695 "r_mbytes_per_sec": 0, 00:11:27.695 "w_mbytes_per_sec": 0 00:11:27.695 }, 00:11:27.695 "claimed": false, 00:11:27.695 "zoned": false, 00:11:27.695 "supported_io_types": { 00:11:27.695 "read": true, 00:11:27.695 "write": true, 00:11:27.695 "unmap": true, 00:11:27.695 "flush": true, 00:11:27.695 "reset": true, 00:11:27.695 "nvme_admin": true, 00:11:27.695 "nvme_io": true, 00:11:27.695 "nvme_io_md": false, 00:11:27.695 "write_zeroes": true, 00:11:27.695 "zcopy": false, 00:11:27.695 "get_zone_info": false, 00:11:27.695 "zone_management": false, 00:11:27.695 "zone_append": false, 00:11:27.695 "compare": true, 00:11:27.695 "compare_and_write": true, 00:11:27.695 "abort": true, 00:11:27.695 "seek_hole": false, 00:11:27.695 "seek_data": false, 00:11:27.695 "copy": true, 00:11:27.695 "nvme_iov_md": false 00:11:27.695 }, 00:11:27.695 "memory_domains": [ 00:11:27.695 { 00:11:27.695 "dma_device_id": "system", 00:11:27.695 "dma_device_type": 1 00:11:27.695 } 00:11:27.695 ], 00:11:27.695 "driver_specific": { 00:11:27.695 "nvme": [ 00:11:27.695 { 00:11:27.695 "trid": { 00:11:27.695 "trtype": "TCP", 00:11:27.695 "adrfam": "IPv4", 00:11:27.695 "traddr": "10.0.0.3", 00:11:27.695 "trsvcid": "4420", 00:11:27.695 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:27.695 }, 00:11:27.695 "ctrlr_data": { 00:11:27.695 "cntlid": 1, 00:11:27.695 "vendor_id": "0x8086", 00:11:27.695 "model_number": "SPDK bdev Controller", 00:11:27.695 "serial_number": "SPDK0", 00:11:27.695 "firmware_revision": "25.01", 00:11:27.695 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:27.695 "oacs": { 00:11:27.695 "security": 0, 00:11:27.695 "format": 0, 00:11:27.695 "firmware": 0, 00:11:27.695 "ns_manage": 0 00:11:27.695 }, 00:11:27.695 "multi_ctrlr": true, 00:11:27.695 "ana_reporting": false 00:11:27.695 }, 00:11:27.695 "vs": { 00:11:27.695 "nvme_version": "1.3" 00:11:27.695 }, 00:11:27.695 "ns_data": { 00:11:27.695 "id": 1, 00:11:27.695 "can_share": true 00:11:27.695 } 00:11:27.695 } 00:11:27.695 ], 00:11:27.695 "mp_policy": "active_passive" 00:11:27.695 } 00:11:27.695 } 00:11:27.695 ] 00:11:27.695 08:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66144 00:11:27.695 08:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:27.695 08:49:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:27.955 Running I/O for 10 seconds... 
00:11:28.892 Latency(us) 00:11:28.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:28.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:28.892 Nvme0n1 : 1.00 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:11:28.892 =================================================================================================================== 00:11:28.892 Total : 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:11:28.892 00:11:29.830 08:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:29.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:29.830 Nvme0n1 : 2.00 5397.50 21.08 0.00 0.00 0.00 0.00 0.00 00:11:29.830 =================================================================================================================== 00:11:29.830 Total : 5397.50 21.08 0.00 0.00 0.00 0.00 0.00 00:11:29.830 00:11:30.088 true 00:11:30.088 08:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:30.088 08:49:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:30.347 08:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:30.347 08:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:30.347 08:49:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66144 00:11:30.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:30.914 Nvme0n1 : 3.00 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:11:30.914 =================================================================================================================== 00:11:30.914 Total : 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:11:30.914 00:11:31.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:31.851 Nvme0n1 : 4.00 5492.75 21.46 0.00 0.00 0.00 0.00 0.00 00:11:31.851 =================================================================================================================== 00:11:31.851 Total : 5492.75 21.46 0.00 0.00 0.00 0.00 0.00 00:11:31.851 00:11:32.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:32.787 Nvme0n1 : 5.00 5433.40 21.22 0.00 0.00 0.00 0.00 0.00 00:11:32.788 =================================================================================================================== 00:11:32.788 Total : 5433.40 21.22 0.00 0.00 0.00 0.00 0.00 00:11:32.788 00:11:34.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:34.200 Nvme0n1 : 6.00 5438.00 21.24 0.00 0.00 0.00 0.00 0.00 00:11:34.200 =================================================================================================================== 00:11:34.200 Total : 5438.00 21.24 0.00 0.00 0.00 0.00 0.00 00:11:34.200 00:11:35.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:35.136 Nvme0n1 : 7.00 5441.29 21.26 0.00 0.00 0.00 0.00 0.00 00:11:35.137 =================================================================================================================== 00:11:35.137 
Total : 5441.29 21.26 0.00 0.00 0.00 0.00 0.00 00:11:35.137 00:11:36.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:36.073 Nvme0n1 : 8.00 5443.75 21.26 0.00 0.00 0.00 0.00 0.00 00:11:36.073 =================================================================================================================== 00:11:36.073 Total : 5443.75 21.26 0.00 0.00 0.00 0.00 0.00 00:11:36.073 00:11:37.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.009 Nvme0n1 : 9.00 5445.67 21.27 0.00 0.00 0.00 0.00 0.00 00:11:37.009 =================================================================================================================== 00:11:37.010 Total : 5445.67 21.27 0.00 0.00 0.00 0.00 0.00 00:11:37.010 00:11:37.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.947 Nvme0n1 : 10.00 5447.20 21.28 0.00 0.00 0.00 0.00 0.00 00:11:37.947 =================================================================================================================== 00:11:37.947 Total : 5447.20 21.28 0.00 0.00 0.00 0.00 0.00 00:11:37.947 00:11:37.947 00:11:37.947 Latency(us) 00:11:37.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:37.947 Nvme0n1 : 10.01 5455.12 21.31 0.00 0.00 23457.90 4110.89 80073.08 00:11:37.947 =================================================================================================================== 00:11:37.947 Total : 5455.12 21.31 0.00 0.00 23457.90 4110.89 80073.08 00:11:37.947 { 00:11:37.947 "results": [ 00:11:37.947 { 00:11:37.947 "job": "Nvme0n1", 00:11:37.947 "core_mask": "0x2", 00:11:37.947 "workload": "randwrite", 00:11:37.947 "status": "finished", 00:11:37.947 "queue_depth": 128, 00:11:37.947 "io_size": 4096, 00:11:37.947 "runtime": 10.008953, 00:11:37.947 "iops": 5455.11603461421, 00:11:37.947 "mibps": 21.309047010211756, 00:11:37.947 "io_failed": 0, 00:11:37.947 "io_timeout": 0, 00:11:37.947 "avg_latency_us": 23457.90057196137, 00:11:37.947 "min_latency_us": 4110.894545454546, 00:11:37.947 "max_latency_us": 80073.07636363637 00:11:37.947 } 00:11:37.947 ], 00:11:37.947 "core_count": 1 00:11:37.947 } 00:11:37.947 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66116 00:11:37.947 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 66116 ']' 00:11:37.947 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 66116 00:11:37.947 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:11:37.947 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:37.947 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66116 00:11:37.947 killing process with pid 66116 00:11:37.947 Received shutdown signal, test time was about 10.000000 seconds 00:11:37.947 00:11:37.947 Latency(us) 00:11:37.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.947 =================================================================================================================== 00:11:37.947 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:37.947 08:49:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:37.947 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:37.947 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66116' 00:11:37.947 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 66116 00:11:37.947 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 66116 00:11:38.883 08:49:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:39.451 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:39.451 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:39.451 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:39.710 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:39.710 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:39.710 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65757 00:11:39.710 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65757 00:11:39.969 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65757 Killed "${NVMF_APP[@]}" "$@" 00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:39.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=66286 00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 66286 00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 66286 ']' 00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:39.969 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:39.969 [2024-09-28 08:49:17.843687] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:39.969 [2024-09-28 08:49:17.843848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.228 [2024-09-28 08:49:18.012387] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.228 [2024-09-28 08:49:18.168235] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.228 [2024-09-28 08:49:18.168304] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.228 [2024-09-28 08:49:18.168338] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.228 [2024-09-28 08:49:18.168354] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.228 [2024-09-28 08:49:18.168365] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:40.228 [2024-09-28 08:49:18.168402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.487 [2024-09-28 08:49:18.316774] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:41.055 08:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:41.055 08:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:41.055 08:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:41.055 08:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:41.055 08:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:41.055 08:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.055 08:49:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:41.314 [2024-09-28 08:49:19.091250] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:41.314 [2024-09-28 08:49:19.091609] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:41.314 [2024-09-28 08:49:19.092038] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:41.314 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:41.314 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c76b27ec-18ba-44c4-95ca-078a4c4a2dde 00:11:41.314 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c76b27ec-18ba-44c4-95ca-078a4c4a2dde 00:11:41.314 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:41.314 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:41.314 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:41.314 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:41.314 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:41.573 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c76b27ec-18ba-44c4-95ca-078a4c4a2dde -t 2000 00:11:41.833 [ 00:11:41.833 { 00:11:41.833 "name": "c76b27ec-18ba-44c4-95ca-078a4c4a2dde", 00:11:41.833 "aliases": [ 00:11:41.833 "lvs/lvol" 00:11:41.833 ], 00:11:41.833 "product_name": "Logical Volume", 00:11:41.833 "block_size": 4096, 00:11:41.833 "num_blocks": 38912, 00:11:41.833 "uuid": "c76b27ec-18ba-44c4-95ca-078a4c4a2dde", 00:11:41.833 "assigned_rate_limits": { 00:11:41.833 "rw_ios_per_sec": 0, 00:11:41.833 "rw_mbytes_per_sec": 0, 00:11:41.833 "r_mbytes_per_sec": 0, 00:11:41.833 "w_mbytes_per_sec": 0 00:11:41.833 }, 00:11:41.833 
"claimed": false, 00:11:41.833 "zoned": false, 00:11:41.833 "supported_io_types": { 00:11:41.833 "read": true, 00:11:41.833 "write": true, 00:11:41.833 "unmap": true, 00:11:41.833 "flush": false, 00:11:41.833 "reset": true, 00:11:41.833 "nvme_admin": false, 00:11:41.833 "nvme_io": false, 00:11:41.833 "nvme_io_md": false, 00:11:41.833 "write_zeroes": true, 00:11:41.833 "zcopy": false, 00:11:41.833 "get_zone_info": false, 00:11:41.833 "zone_management": false, 00:11:41.833 "zone_append": false, 00:11:41.833 "compare": false, 00:11:41.833 "compare_and_write": false, 00:11:41.833 "abort": false, 00:11:41.833 "seek_hole": true, 00:11:41.833 "seek_data": true, 00:11:41.833 "copy": false, 00:11:41.833 "nvme_iov_md": false 00:11:41.833 }, 00:11:41.833 "driver_specific": { 00:11:41.833 "lvol": { 00:11:41.833 "lvol_store_uuid": "bc7fe0fb-5481-4544-8318-aa16a6cb3318", 00:11:41.833 "base_bdev": "aio_bdev", 00:11:41.833 "thin_provision": false, 00:11:41.833 "num_allocated_clusters": 38, 00:11:41.833 "snapshot": false, 00:11:41.833 "clone": false, 00:11:41.833 "esnap_clone": false 00:11:41.833 } 00:11:41.833 } 00:11:41.833 } 00:11:41.833 ] 00:11:41.833 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:41.833 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:41.833 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:42.092 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:42.092 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:42.092 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:42.351 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:42.351 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:42.610 [2024-09-28 08:49:20.376533] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:42.610 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:42.610 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:42.610 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:42.610 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.610 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.610 08:49:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.610 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.610 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.610 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:42.610 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:42.610 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:42.610 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:42.869 request: 00:11:42.869 { 00:11:42.869 "uuid": "bc7fe0fb-5481-4544-8318-aa16a6cb3318", 00:11:42.869 "method": "bdev_lvol_get_lvstores", 00:11:42.869 "req_id": 1 00:11:42.869 } 00:11:42.869 Got JSON-RPC error response 00:11:42.869 response: 00:11:42.869 { 00:11:42.869 "code": -19, 00:11:42.869 "message": "No such device" 00:11:42.869 } 00:11:42.869 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:42.869 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:42.869 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:42.869 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:42.869 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:43.141 aio_bdev 00:11:43.141 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c76b27ec-18ba-44c4-95ca-078a4c4a2dde 00:11:43.141 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=c76b27ec-18ba-44c4-95ca-078a4c4a2dde 00:11:43.141 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:43.141 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:43.141 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:43.141 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:43.141 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:43.421 08:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c76b27ec-18ba-44c4-95ca-078a4c4a2dde -t 2000 00:11:43.421 [ 00:11:43.421 { 
00:11:43.421 "name": "c76b27ec-18ba-44c4-95ca-078a4c4a2dde", 00:11:43.421 "aliases": [ 00:11:43.421 "lvs/lvol" 00:11:43.421 ], 00:11:43.421 "product_name": "Logical Volume", 00:11:43.421 "block_size": 4096, 00:11:43.421 "num_blocks": 38912, 00:11:43.421 "uuid": "c76b27ec-18ba-44c4-95ca-078a4c4a2dde", 00:11:43.421 "assigned_rate_limits": { 00:11:43.421 "rw_ios_per_sec": 0, 00:11:43.421 "rw_mbytes_per_sec": 0, 00:11:43.421 "r_mbytes_per_sec": 0, 00:11:43.421 "w_mbytes_per_sec": 0 00:11:43.421 }, 00:11:43.421 "claimed": false, 00:11:43.421 "zoned": false, 00:11:43.421 "supported_io_types": { 00:11:43.421 "read": true, 00:11:43.421 "write": true, 00:11:43.421 "unmap": true, 00:11:43.421 "flush": false, 00:11:43.421 "reset": true, 00:11:43.421 "nvme_admin": false, 00:11:43.421 "nvme_io": false, 00:11:43.421 "nvme_io_md": false, 00:11:43.421 "write_zeroes": true, 00:11:43.421 "zcopy": false, 00:11:43.421 "get_zone_info": false, 00:11:43.421 "zone_management": false, 00:11:43.421 "zone_append": false, 00:11:43.421 "compare": false, 00:11:43.421 "compare_and_write": false, 00:11:43.421 "abort": false, 00:11:43.421 "seek_hole": true, 00:11:43.421 "seek_data": true, 00:11:43.421 "copy": false, 00:11:43.421 "nvme_iov_md": false 00:11:43.421 }, 00:11:43.421 "driver_specific": { 00:11:43.421 "lvol": { 00:11:43.421 "lvol_store_uuid": "bc7fe0fb-5481-4544-8318-aa16a6cb3318", 00:11:43.421 "base_bdev": "aio_bdev", 00:11:43.421 "thin_provision": false, 00:11:43.421 "num_allocated_clusters": 38, 00:11:43.421 "snapshot": false, 00:11:43.421 "clone": false, 00:11:43.421 "esnap_clone": false 00:11:43.421 } 00:11:43.421 } 00:11:43.421 } 00:11:43.421 ] 00:11:43.421 08:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:43.421 08:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:43.421 08:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:43.989 08:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:43.989 08:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:43.989 08:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:43.989 08:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:43.989 08:49:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c76b27ec-18ba-44c4-95ca-078a4c4a2dde 00:11:44.246 08:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc7fe0fb-5481-4544-8318-aa16a6cb3318 00:11:44.505 08:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:44.765 08:49:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:45.024 ************************************ 00:11:45.024 END TEST lvs_grow_dirty 00:11:45.024 ************************************ 00:11:45.024 00:11:45.024 real 0m21.632s 00:11:45.024 user 0m45.188s 00:11:45.024 sys 0m9.262s 00:11:45.024 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.024 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:45.283 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:45.283 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:11:45.283 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:11:45.283 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:11:45.283 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:45.283 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:11:45.283 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:11:45.283 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:11:45.283 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:45.283 nvmf_trace.0 00:11:45.283 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:11:45.283 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:45.283 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:45.283 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:45.542 rmmod nvme_tcp 00:11:45.542 rmmod nvme_fabrics 00:11:45.542 rmmod nvme_keyring 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 66286 ']' 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 66286 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 66286 ']' 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 66286 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:11:45.542 08:49:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66286 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:45.542 killing process with pid 66286 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66286' 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 66286 00:11:45.542 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 66286 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:46.480 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:11:46.740 00:11:46.740 real 0m44.308s 00:11:46.740 user 1m10.861s 00:11:46.740 sys 0m12.440s 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:46.740 ************************************ 00:11:46.740 END TEST nvmf_lvs_grow 00:11:46.740 ************************************ 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:46.740 ************************************ 00:11:46.740 START TEST nvmf_bdev_io_wait 00:11:46.740 ************************************ 00:11:46.740 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:47.001 * Looking for test storage... 
00:11:47.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:47.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.001 --rc genhtml_branch_coverage=1 00:11:47.001 --rc genhtml_function_coverage=1 00:11:47.001 --rc genhtml_legend=1 00:11:47.001 --rc geninfo_all_blocks=1 00:11:47.001 --rc geninfo_unexecuted_blocks=1 00:11:47.001 00:11:47.001 ' 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:47.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.001 --rc genhtml_branch_coverage=1 00:11:47.001 --rc genhtml_function_coverage=1 00:11:47.001 --rc genhtml_legend=1 00:11:47.001 --rc geninfo_all_blocks=1 00:11:47.001 --rc geninfo_unexecuted_blocks=1 00:11:47.001 00:11:47.001 ' 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:47.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.001 --rc genhtml_branch_coverage=1 00:11:47.001 --rc genhtml_function_coverage=1 00:11:47.001 --rc genhtml_legend=1 00:11:47.001 --rc geninfo_all_blocks=1 00:11:47.001 --rc geninfo_unexecuted_blocks=1 00:11:47.001 00:11:47.001 ' 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:47.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.001 --rc genhtml_branch_coverage=1 00:11:47.001 --rc genhtml_function_coverage=1 00:11:47.001 --rc genhtml_legend=1 00:11:47.001 --rc geninfo_all_blocks=1 00:11:47.001 --rc geninfo_unexecuted_blocks=1 00:11:47.001 00:11:47.001 ' 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.001 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:47.002 
08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:47.002 Cannot find device "nvmf_init_br" 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:47.002 Cannot find device "nvmf_init_br2" 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:47.002 Cannot find device "nvmf_tgt_br" 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:47.002 Cannot find device "nvmf_tgt_br2" 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:47.002 Cannot find device "nvmf_init_br" 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:47.002 Cannot find device "nvmf_init_br2" 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:47.002 Cannot find device "nvmf_tgt_br" 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:47.002 Cannot find device "nvmf_tgt_br2" 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:47.002 Cannot find device "nvmf_br" 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:11:47.002 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:47.262 Cannot find device "nvmf_init_if" 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:47.262 Cannot find device "nvmf_init_if2" 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:47.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:11:47.262 
08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:47.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:47.262 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:47.262 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:11:47.262 00:11:47.262 --- 10.0.0.3 ping statistics --- 00:11:47.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.262 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:47.262 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:47.262 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:11:47.262 00:11:47.262 --- 10.0.0.4 ping statistics --- 00:11:47.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.262 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:47.262 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:47.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:11:47.262 00:11:47.263 --- 10.0.0.1 ping statistics --- 00:11:47.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.263 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:47.263 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:47.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:47.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:11:47.263 00:11:47.263 --- 10.0.0.2 ping statistics --- 00:11:47.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.263 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:11:47.263 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.263 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:11:47.263 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:47.263 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.263 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:47.263 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:47.263 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.263 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:47.263 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:47.522 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:47.522 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:47.522 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:47.522 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:47.522 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=66665 00:11:47.522 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 66665 00:11:47.522 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:47.522 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 66665 ']' 00:11:47.522 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.522 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:47.522 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.522 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:47.522 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:47.522 [2024-09-28 08:49:25.383548] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
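Note: the sequence above is the nvmfappstart step — nvmf_tgt is launched with ip netns exec inside nvmf_tgt_ns_spdk and the test blocks until the RPC socket answers (waitforlisten). A minimal stand-alone sketch of that pattern follows; the binary path, namespace name, and flags are copied from the log, while the rpc.py location and the polling loop are assumptions for illustration, not the exact autotest_common.sh code.

#!/usr/bin/env bash
# Sketch only: start nvmf_tgt inside the test namespace and wait for its RPC socket.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
APP=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location of rpc.py in this repo layout
SOCK=/var/tmp/spdk.sock

# Same invocation as nvmf/common.sh@504 above: shm id 0, tracepoint mask 0xFFFF, core mask 0xF.
ip netns exec "$NS" "$APP" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

# Poll the UNIX-domain socket until the app answers, then rpc_cmd calls can proceed.
for _ in $(seq 1 100); do
    if "$RPC" -s "$SOCK" rpc_get_methods &>/dev/null; then
        echo "nvmf_tgt (pid $nvmfpid) is up on $SOCK"
        break
    fi
    sleep 0.1
done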
00:11:47.522 [2024-09-28 08:49:25.383728] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.781 [2024-09-28 08:49:25.558085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.781 [2024-09-28 08:49:25.728710] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.781 [2024-09-28 08:49:25.728823] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.781 [2024-09-28 08:49:25.728858] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.781 [2024-09-28 08:49:25.728879] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.781 [2024-09-28 08:49:25.728901] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.781 [2024-09-28 08:49:25.729129] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.781 [2024-09-28 08:49:25.729782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.781 [2024-09-28 08:49:25.729878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.781 [2024-09-28 08:49:25.729899] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.719 [2024-09-28 08:49:26.632472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.719 [2024-09-28 08:49:26.653753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.719 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.977 Malloc0 00:11:48.977 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.977 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:48.977 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.977 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.977 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.977 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:48.977 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.977 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:48.978 [2024-09-28 08:49:26.761634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66711 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66713 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:48.978 08:49:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:48.978 { 00:11:48.978 "params": { 00:11:48.978 "name": "Nvme$subsystem", 00:11:48.978 "trtype": "$TEST_TRANSPORT", 00:11:48.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.978 "adrfam": "ipv4", 00:11:48.978 "trsvcid": "$NVMF_PORT", 00:11:48.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.978 "hdgst": ${hdgst:-false}, 00:11:48.978 "ddgst": ${ddgst:-false} 00:11:48.978 }, 00:11:48.978 "method": "bdev_nvme_attach_controller" 00:11:48.978 } 00:11:48.978 EOF 00:11:48.978 )") 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:48.978 { 00:11:48.978 "params": { 00:11:48.978 "name": "Nvme$subsystem", 00:11:48.978 "trtype": "$TEST_TRANSPORT", 00:11:48.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.978 "adrfam": "ipv4", 00:11:48.978 "trsvcid": "$NVMF_PORT", 00:11:48.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.978 "hdgst": ${hdgst:-false}, 00:11:48.978 "ddgst": ${ddgst:-false} 00:11:48.978 }, 00:11:48.978 "method": "bdev_nvme_attach_controller" 00:11:48.978 } 00:11:48.978 EOF 00:11:48.978 )") 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66715 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66718 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:48.978 { 00:11:48.978 "params": { 00:11:48.978 "name": "Nvme$subsystem", 00:11:48.978 "trtype": "$TEST_TRANSPORT", 00:11:48.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.978 "adrfam": "ipv4", 00:11:48.978 "trsvcid": 
"$NVMF_PORT", 00:11:48.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.978 "hdgst": ${hdgst:-false}, 00:11:48.978 "ddgst": ${ddgst:-false} 00:11:48.978 }, 00:11:48.978 "method": "bdev_nvme_attach_controller" 00:11:48.978 } 00:11:48.978 EOF 00:11:48.978 )") 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:48.978 "params": { 00:11:48.978 "name": "Nvme1", 00:11:48.978 "trtype": "tcp", 00:11:48.978 "traddr": "10.0.0.3", 00:11:48.978 "adrfam": "ipv4", 00:11:48.978 "trsvcid": "4420", 00:11:48.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.978 "hdgst": false, 00:11:48.978 "ddgst": false 00:11:48.978 }, 00:11:48.978 "method": "bdev_nvme_attach_controller" 00:11:48.978 }' 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:48.978 "params": { 00:11:48.978 "name": "Nvme1", 00:11:48.978 "trtype": "tcp", 00:11:48.978 "traddr": "10.0.0.3", 00:11:48.978 "adrfam": "ipv4", 00:11:48.978 "trsvcid": "4420", 00:11:48.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.978 "hdgst": false, 00:11:48.978 "ddgst": false 00:11:48.978 }, 00:11:48.978 "method": "bdev_nvme_attach_controller" 00:11:48.978 }' 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:48.978 "params": { 00:11:48.978 "name": "Nvme1", 00:11:48.978 "trtype": "tcp", 00:11:48.978 "traddr": "10.0.0.3", 00:11:48.978 "adrfam": "ipv4", 00:11:48.978 "trsvcid": "4420", 00:11:48.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.978 "hdgst": false, 00:11:48.978 "ddgst": false 00:11:48.978 }, 00:11:48.978 "method": "bdev_nvme_attach_controller" 00:11:48.978 }' 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:48.978 { 00:11:48.978 "params": { 00:11:48.978 "name": "Nvme$subsystem", 00:11:48.978 "trtype": "$TEST_TRANSPORT", 00:11:48.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:48.978 "adrfam": "ipv4", 00:11:48.978 "trsvcid": "$NVMF_PORT", 00:11:48.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:48.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:48.978 "hdgst": ${hdgst:-false}, 00:11:48.978 "ddgst": ${ddgst:-false} 00:11:48.978 }, 00:11:48.978 "method": "bdev_nvme_attach_controller" 00:11:48.978 } 00:11:48.978 EOF 00:11:48.978 )") 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:48.978 "params": { 00:11:48.978 "name": "Nvme1", 00:11:48.978 "trtype": "tcp", 00:11:48.978 "traddr": "10.0.0.3", 00:11:48.978 "adrfam": "ipv4", 00:11:48.978 "trsvcid": "4420", 00:11:48.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.978 "hdgst": false, 00:11:48.978 "ddgst": false 00:11:48.978 }, 00:11:48.978 "method": "bdev_nvme_attach_controller" 00:11:48.978 }' 00:11:48.978 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66711 00:11:48.978 [2024-09-28 08:49:26.880182] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:48.978 [2024-09-28 08:49:26.880184] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:48.978 [2024-09-28 08:49:26.880348] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:48.978 [2024-09-28 08:49:26.880677] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:48.978 [2024-09-28 08:49:26.914327] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:11:48.978 [2024-09-28 08:49:26.914736] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:48.978 [2024-09-28 08:49:26.951447] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:11:48.978 [2024-09-28 08:49:26.951863] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:49.236 [2024-09-28 08:49:27.095984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.236 [2024-09-28 08:49:27.140672] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.236 [2024-09-28 08:49:27.186462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.495 [2024-09-28 08:49:27.234744] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.495 [2024-09-28 08:49:27.300205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:49.495 [2024-09-28 08:49:27.352663] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:11:49.495 [2024-09-28 08:49:27.401294] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:49.495 [2024-09-28 08:49:27.437716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:49.495 [2024-09-28 08:49:27.482316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:49.753 [2024-09-28 08:49:27.535499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:49.753 [2024-09-28 08:49:27.588953] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:49.753 [2024-09-28 08:49:27.628308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:49.753 Running I/O for 1 seconds... 00:11:49.753 Running I/O for 1 seconds... 00:11:50.012 Running I/O for 1 seconds... 00:11:50.012 Running I/O for 1 seconds... 
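Note: each of the four bdevperf jobs above receives its controller-attach parameters as an SPDK JSON config over /dev/fd/63 (the gen_nvmf_target_json helper). A sketch of reproducing one run by hand follows; the attach parameters and bdevperf flags are the ones printed in the log, while the enclosing "subsystems"/"bdev" wrapper and the helper function name here are assumptions, not verbatim common.sh code.

#!/usr/bin/env bash
# Sketch only: rebuild the JSON fed to bdevperf and rerun the "write" job from above.
set -euo pipefail

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf

attach_json() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Process substitution is what shows up as --json /dev/fd/63 in the log above.
"$BDEVPERF" -m 0x10 -i 1 --json <(attach_json) -q 128 -o 4096 -w write -t 1 -s 256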
00:11:50.947 7934.00 IOPS, 30.99 MiB/s 00:11:50.947 Latency(us) 00:11:50.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.947 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:50.947 Nvme1n1 : 1.01 7975.01 31.15 0.00 0.00 15957.46 7745.16 20971.52 00:11:50.947 =================================================================================================================== 00:11:50.947 Total : 7975.01 31.15 0.00 0.00 15957.46 7745.16 20971.52 00:11:50.947 7543.00 IOPS, 29.46 MiB/s 00:11:50.947 Latency(us) 00:11:50.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.947 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:50.947 Nvme1n1 : 1.01 7608.84 29.72 0.00 0.00 16733.99 8936.73 27644.28 00:11:50.947 =================================================================================================================== 00:11:50.947 Total : 7608.84 29.72 0.00 0.00 16733.99 8936.73 27644.28 00:11:50.947 6747.00 IOPS, 26.36 MiB/s 00:11:50.947 Latency(us) 00:11:50.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.947 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:50.947 Nvme1n1 : 1.01 6811.28 26.61 0.00 0.00 18679.81 2815.07 27882.59 00:11:50.947 =================================================================================================================== 00:11:50.947 Total : 6811.28 26.61 0.00 0.00 18679.81 2815.07 27882.59 00:11:50.947 143664.00 IOPS, 561.19 MiB/s 00:11:50.947 Latency(us) 00:11:50.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.947 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:50.947 Nvme1n1 : 1.00 143281.84 559.69 0.00 0.00 888.42 513.86 2621.44 00:11:50.947 =================================================================================================================== 00:11:50.947 Total : 143281.84 559.69 0.00 0.00 888.42 513.86 2621.44 00:11:51.882 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66713 00:11:51.882 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66715 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66718 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 
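Note: the four summaries above report one result block per workload (read, unmap, write, flush). When scanning a saved, one-entry-per-line copy of this autotest log for those numbers, something like the sketch below works; build.log is a placeholder file name and the field positions simply mirror the "Job:" and "Total :" rows shown above.

# Sketch only: pull per-workload IOPS out of a saved copy of this log.
grep -E 'workload:|Total :' build.log | awk '
    /workload:/ { split($0, a, "workload: "); split(a[2], b, ","); wl = b[1] }
    /Total :/   { printf "%-8s %12.2f IOPS\n", wl, $3 }
'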
00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.141 rmmod nvme_tcp 00:11:52.141 rmmod nvme_fabrics 00:11:52.141 rmmod nvme_keyring 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 66665 ']' 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 66665 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 66665 ']' 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 66665 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:52.141 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66665 00:11:52.141 08:49:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:52.141 08:49:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:52.141 killing process with pid 66665 00:11:52.141 08:49:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66665' 00:11:52.141 08:49:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 66665 00:11:52.141 08:49:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 66665 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 
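Note: nvmftestfini then tears the test network back down — the iptr helper restores the saved iptables rules minus everything tagged SPDK_NVMF, and the veth/bridge topology is removed before the namespace itself. Collected into one best-effort block, the pattern looks roughly like the following (a consolidated sketch, not the verbatim common.sh code; interface and namespace names are the ones used above).

# Sketch only: best-effort cleanup of the rules and links created for the test.
set +e

# Drop only the rules this test added; they all carry the SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Detach the host-side veth ends from the bridge, bring them down, delete the bridge.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster
    ip link set "$dev" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2

# Removing the namespace also removes nvmf_tgt_if/nvmf_tgt_if2 living inside it.
ip netns delete nvmf_tgt_ns_spdk
set -e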
00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:53.131 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:11:53.390 00:11:53.390 real 0m6.582s 00:11:53.390 user 0m29.445s 00:11:53.390 sys 0m2.791s 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.390 ************************************ 00:11:53.390 END TEST nvmf_bdev_io_wait 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:53.390 ************************************ 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:53.390 08:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.391 08:49:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:53.391 ************************************ 00:11:53.391 START TEST nvmf_queue_depth 00:11:53.391 ************************************ 00:11:53.391 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:53.651 * Looking for test storage... 
00:11:53.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:53.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.651 --rc genhtml_branch_coverage=1 00:11:53.651 --rc genhtml_function_coverage=1 00:11:53.651 --rc genhtml_legend=1 00:11:53.651 --rc geninfo_all_blocks=1 00:11:53.651 --rc geninfo_unexecuted_blocks=1 00:11:53.651 00:11:53.651 ' 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:53.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.651 --rc genhtml_branch_coverage=1 00:11:53.651 --rc genhtml_function_coverage=1 00:11:53.651 --rc genhtml_legend=1 00:11:53.651 --rc geninfo_all_blocks=1 00:11:53.651 --rc geninfo_unexecuted_blocks=1 00:11:53.651 00:11:53.651 ' 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:53.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.651 --rc genhtml_branch_coverage=1 00:11:53.651 --rc genhtml_function_coverage=1 00:11:53.651 --rc genhtml_legend=1 00:11:53.651 --rc geninfo_all_blocks=1 00:11:53.651 --rc geninfo_unexecuted_blocks=1 00:11:53.651 00:11:53.651 ' 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:53.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.651 --rc genhtml_branch_coverage=1 00:11:53.651 --rc genhtml_function_coverage=1 00:11:53.651 --rc genhtml_legend=1 00:11:53.651 --rc geninfo_all_blocks=1 00:11:53.651 --rc geninfo_unexecuted_blocks=1 00:11:53.651 00:11:53.651 ' 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.651 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:53.652 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:53.652 
08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:53.652 08:49:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:53.652 Cannot find device "nvmf_init_br" 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:53.652 Cannot find device "nvmf_init_br2" 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:53.652 Cannot find device "nvmf_tgt_br" 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:53.652 Cannot find device "nvmf_tgt_br2" 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:53.652 Cannot find device "nvmf_init_br" 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:53.652 Cannot find device "nvmf_init_br2" 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:53.652 Cannot find device "nvmf_tgt_br" 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:53.652 Cannot find device "nvmf_tgt_br2" 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:11:53.652 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:53.912 Cannot find device "nvmf_br" 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:53.912 Cannot find device "nvmf_init_if" 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:53.912 Cannot find device "nvmf_init_if2" 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:53.912 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:53.912 08:49:31 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:53.912 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:53.912 
08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:53.912 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:53.913 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:53.913 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:53.913 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:11:53.913 00:11:53.913 --- 10.0.0.3 ping statistics --- 00:11:53.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.913 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:11:53.913 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:53.913 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:53.913 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:11:53.913 00:11:53.913 --- 10.0.0.4 ping statistics --- 00:11:53.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.913 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:53.913 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:53.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:53.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:53.913 00:11:53.913 --- 10.0.0.1 ping statistics --- 00:11:53.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.913 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:53.913 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:53.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:53.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:11:53.913 00:11:53.913 --- 10.0.0.2 ping statistics --- 00:11:53.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:53.913 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:54.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=67027 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 67027 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 67027 ']' 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.173 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:54.173 [2024-09-28 08:49:32.065359] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
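(Annotation, not part of the captured log output.) The block above is the nvmf_veth_init helper from test/nvmf/common.sh: it creates two host-side initiator veths (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2), two target-side veths moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4), joins the peer ends on the nvmf_br bridge, opens TCP port 4420 in iptables, and ping-verifies every path. A minimal single-path sketch of the same setup by hand, using only names and addresses that appear in the log:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # host/initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                             # host into the target namespace

The second pair (nvmf_init_if2 / nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) is built the same way; it is the pair that later gives the multipath test its second path.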
00:11:54.173 [2024-09-28 08:49:32.065782] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.432 [2024-09-28 08:49:32.246038] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.691 [2024-09-28 08:49:32.478871] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.691 [2024-09-28 08:49:32.478948] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.691 [2024-09-28 08:49:32.478998] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.691 [2024-09-28 08:49:32.479033] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.691 [2024-09-28 08:49:32.479051] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:54.691 [2024-09-28 08:49:32.479100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.691 [2024-09-28 08:49:32.667088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:55.258 08:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.258 08:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:55.258 08:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:55.258 08:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:55.258 08:49:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:55.258 [2024-09-28 08:49:33.019544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:55.258 Malloc0 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.258 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:55.258 [2024-09-28 08:49:33.109394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:55.259 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.259 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67059 00:11:55.259 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:55.259 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:55.259 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67059 /var/tmp/bdevperf.sock 00:11:55.259 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 67059 ']' 00:11:55.259 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:55.259 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:55.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:55.259 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:55.259 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:55.259 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:55.259 [2024-09-28 08:49:33.209422] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
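(Annotation, not part of the captured log output.) At this point queue_depth.sh has configured the target and launched the measurement process. The rpc_cmd calls above resolve to scripts/rpc.py invocations against the default /var/tmp/spdk.sock; a sketch of the sequence, with every argument taken from the log:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

bdevperf listens on its own RPC socket (/var/tmp/bdevperf.sock); the test then attaches it to the subsystem with bdev_nvme_attach_controller and triggers the 10-second run through bdevperf.py perform_tests, as the following lines show.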
00:11:55.259 [2024-09-28 08:49:33.209606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67059 ] 00:11:55.516 [2024-09-28 08:49:33.376640] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.774 [2024-09-28 08:49:33.605008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.033 [2024-09-28 08:49:33.775539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:56.292 08:49:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:56.292 08:49:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:56.292 08:49:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:56.292 08:49:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.293 08:49:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:56.293 NVMe0n1 00:11:56.293 08:49:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.293 08:49:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:56.551 Running I/O for 10 seconds... 00:12:06.643 5881.00 IOPS, 22.97 MiB/s 6329.00 IOPS, 24.72 MiB/s 6574.67 IOPS, 25.68 MiB/s 6764.50 IOPS, 26.42 MiB/s 6888.00 IOPS, 26.91 MiB/s 6959.83 IOPS, 27.19 MiB/s 6992.14 IOPS, 27.31 MiB/s 7007.62 IOPS, 27.37 MiB/s 7012.22 IOPS, 27.39 MiB/s 7056.10 IOPS, 27.56 MiB/s 00:12:06.643 Latency(us) 00:12:06.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.643 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:06.643 Verification LBA range: start 0x0 length 0x4000 00:12:06.643 NVMe0n1 : 10.11 7065.80 27.60 0.00 0.00 144090.17 24069.59 113913.48 00:12:06.643 =================================================================================================================== 00:12:06.643 Total : 7065.80 27.60 0.00 0.00 144090.17 24069.59 113913.48 00:12:06.643 { 00:12:06.643 "results": [ 00:12:06.643 { 00:12:06.643 "job": "NVMe0n1", 00:12:06.643 "core_mask": "0x1", 00:12:06.643 "workload": "verify", 00:12:06.643 "status": "finished", 00:12:06.643 "verify_range": { 00:12:06.643 "start": 0, 00:12:06.643 "length": 16384 00:12:06.643 }, 00:12:06.643 "queue_depth": 1024, 00:12:06.643 "io_size": 4096, 00:12:06.643 "runtime": 10.113217, 00:12:06.643 "iops": 7065.803097075836, 00:12:06.643 "mibps": 27.600793347952486, 00:12:06.643 "io_failed": 0, 00:12:06.643 "io_timeout": 0, 00:12:06.643 "avg_latency_us": 144090.17123019497, 00:12:06.643 "min_latency_us": 24069.585454545453, 00:12:06.643 "max_latency_us": 113913.48363636364 00:12:06.643 } 00:12:06.643 ], 00:12:06.643 "core_count": 1 00:12:06.643 } 00:12:06.643 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67059 00:12:06.643 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 67059 ']' 00:12:06.643 08:49:44 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 67059 00:12:06.643 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:06.643 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:06.643 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67059 00:12:06.643 killing process with pid 67059 00:12:06.643 Received shutdown signal, test time was about 10.000000 seconds 00:12:06.643 00:12:06.643 Latency(us) 00:12:06.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.643 =================================================================================================================== 00:12:06.643 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:06.643 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:06.643 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:06.643 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67059' 00:12:06.643 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 67059 00:12:06.643 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 67059 00:12:07.580 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:07.580 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:07.580 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:07.580 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:12:07.580 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:07.580 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:12:07.580 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:07.580 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:07.581 rmmod nvme_tcp 00:12:07.581 rmmod nvme_fabrics 00:12:07.581 rmmod nvme_keyring 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 67027 ']' 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 67027 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 67027 ']' 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 67027 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67027 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:07.581 killing process with pid 67027 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67027' 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 67027 00:12:07.581 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 67027 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:08.958 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:08.959 08:49:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.959 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.959 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.959 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:12:08.959 00:12:08.959 real 0m15.481s 00:12:08.959 user 0m25.820s 00:12:08.959 sys 0m2.362s 00:12:08.959 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:08.959 ************************************ 00:12:08.959 END TEST nvmf_queue_depth 00:12:08.959 ************************************ 00:12:08.959 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:08.959 08:49:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:08.959 08:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:08.959 08:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:08.959 08:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:08.959 ************************************ 00:12:08.959 START TEST nvmf_target_multipath 00:12:08.959 ************************************ 00:12:08.959 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:08.959 * Looking for test storage... 
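(Annotation, not part of the captured log output.) The teardown just logged is nvmftestfini: the NVMe host modules are removed, the SPDK-tagged firewall rules are dropped, and the veth/bridge topology is deleted. A rough equivalent of the steps shown above; the iptables line reconstructs the iptr helper from the iptables-save, grep -v SPDK_NVMF, and iptables-restore calls in the log, and the final ip netns delete is an assumption based on the _remove_spdk_ns helper's name:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore           # drop only the SPDK_NVMF-tagged rules
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" nomaster && ip link set "$l" down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                               # assumed _remove_spdk_ns equivalent

The suite then moves on to multipath.sh, which rebuilds the same topology from scratch.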
00:12:08.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:08.959 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:08.959 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:12:08.959 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.218 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:09.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.219 --rc genhtml_branch_coverage=1 00:12:09.219 --rc genhtml_function_coverage=1 00:12:09.219 --rc genhtml_legend=1 00:12:09.219 --rc geninfo_all_blocks=1 00:12:09.219 --rc geninfo_unexecuted_blocks=1 00:12:09.219 00:12:09.219 ' 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:09.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.219 --rc genhtml_branch_coverage=1 00:12:09.219 --rc genhtml_function_coverage=1 00:12:09.219 --rc genhtml_legend=1 00:12:09.219 --rc geninfo_all_blocks=1 00:12:09.219 --rc geninfo_unexecuted_blocks=1 00:12:09.219 00:12:09.219 ' 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:09.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.219 --rc genhtml_branch_coverage=1 00:12:09.219 --rc genhtml_function_coverage=1 00:12:09.219 --rc genhtml_legend=1 00:12:09.219 --rc geninfo_all_blocks=1 00:12:09.219 --rc geninfo_unexecuted_blocks=1 00:12:09.219 00:12:09.219 ' 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:09.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.219 --rc genhtml_branch_coverage=1 00:12:09.219 --rc genhtml_function_coverage=1 00:12:09.219 --rc genhtml_legend=1 00:12:09.219 --rc geninfo_all_blocks=1 00:12:09.219 --rc geninfo_unexecuted_blocks=1 00:12:09.219 00:12:09.219 ' 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.219 
08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:09.219 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:09.219 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:09.220 08:49:47 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:09.220 Cannot find device "nvmf_init_br" 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:09.220 Cannot find device "nvmf_init_br2" 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:09.220 Cannot find device "nvmf_tgt_br" 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:09.220 Cannot find device "nvmf_tgt_br2" 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:09.220 Cannot find device "nvmf_init_br" 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:09.220 Cannot find device "nvmf_init_br2" 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:09.220 Cannot find device "nvmf_tgt_br" 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:09.220 Cannot find device "nvmf_tgt_br2" 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:09.220 Cannot find device "nvmf_br" 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:09.220 Cannot find device "nvmf_init_if" 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:09.220 Cannot find device "nvmf_init_if2" 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:12:09.220 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:09.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.479 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:12:09.479 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:09.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:09.479 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:12:09.479 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:09.479 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:09.479 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:09.479 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
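(Annotation, not part of the captured log output.) The repeated "Cannot find device ..." and "Cannot open network namespace ..." messages above are expected: before building the topology, the setup first tries to remove any leftovers from a previous run, and each removal is allowed to fail. The xtrace pairs of a failing command followed by a bare true correspond to a tolerant pattern along the lines of:

  ip link set nvmf_init_br nomaster || true
  ip link delete nvmf_init_if || true                            # stderr is left visible, hence the log noise
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true

On a clean VM none of these devices exists yet, so every removal prints an error and the setup proceeds.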
00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:09.480 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:09.480 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:12:09.480 00:12:09.480 --- 10.0.0.3 ping statistics --- 00:12:09.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.480 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:09.480 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:09.480 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:12:09.480 00:12:09.480 --- 10.0.0.4 ping statistics --- 00:12:09.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.480 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:09.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:12:09.480 00:12:09.480 --- 10.0.0.1 ping statistics --- 00:12:09.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.480 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:09.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:12:09.480 00:12:09.480 --- 10.0.0.2 ping statistics --- 00:12:09.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.480 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:09.480 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:09.739 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=67451 00:12:09.739 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.739 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 67451 00:12:09.739 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 67451 ']' 00:12:09.739 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.739 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:09.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
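(Annotation, not part of the captured log output.) nvmfappstart -m 0xF launches the target inside the namespace with a four-core mask (the earlier queue_depth run used -m 0x2, a single core) and then blocks in waitforlisten until the RPC socket answers. A sketch of the equivalent, assuming the usual poll-the-RPC-socket loop; the exact waitforlisten internals are not visible in the log:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

Once the target is up, multipath.sh creates the TCP transport and exposes nqn.2016-06.io.spdk:cnode1 behind two listeners, 10.0.0.3:4420 and 10.0.0.4:4420, which is what the initiator-side nvme connect calls below attach to.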
00:12:09.739 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.739 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:09.739 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:09.739 [2024-09-28 08:49:47.599094] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:09.739 [2024-09-28 08:49:47.599881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.997 [2024-09-28 08:49:47.778491] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.255 [2024-09-28 08:49:48.015314] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.255 [2024-09-28 08:49:48.015394] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.255 [2024-09-28 08:49:48.015431] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.255 [2024-09-28 08:49:48.015447] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.255 [2024-09-28 08:49:48.015463] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.255 [2024-09-28 08:49:48.015690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.255 [2024-09-28 08:49:48.015785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.255 [2024-09-28 08:49:48.016684] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.255 [2024-09-28 08:49:48.016730] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.255 [2024-09-28 08:49:48.200451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:10.823 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:10.823 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:12:10.823 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:10.823 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:10.823 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:10.823 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.823 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:11.082 [2024-09-28 08:49:48.895483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.082 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:12:11.340 Malloc0 00:12:11.340 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:12:11.599 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.858 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:12.117 [2024-09-28 08:49:49.932562] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:12.117 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:12:12.375 [2024-09-28 08:49:50.180829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:12:12.375 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:12:12.375 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:12:12.634 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.634 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:12:12.634 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.634 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:12.634 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67546 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:14.539 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:12:14.539 [global] 00:12:14.539 thread=1 00:12:14.539 invalidate=1 00:12:14.539 rw=randrw 00:12:14.539 time_based=1 00:12:14.539 runtime=6 00:12:14.539 ioengine=libaio 00:12:14.539 direct=1 00:12:14.539 bs=4096 00:12:14.539 iodepth=128 00:12:14.539 norandommap=0 00:12:14.539 numjobs=1 00:12:14.539 00:12:14.539 verify_dump=1 00:12:14.539 verify_backlog=512 00:12:14.539 verify_state_save=0 00:12:14.539 do_verify=1 00:12:14.539 verify=crc32c-intel 00:12:14.539 [job0] 00:12:14.539 filename=/dev/nvme0n1 00:12:14.838 Could not set queue depth (nvme0n1) 00:12:14.838 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:14.838 fio-3.35 00:12:14.838 Starting 1 thread 00:12:15.774 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:16.033 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:12:16.291 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:12:16.291 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:16.291 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:16.291 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:16.291 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:16.291 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:16.291 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:12:16.291 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:16.291 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:16.291 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:16.291 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:16.291 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:16.291 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:16.550 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:12:16.809 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:12:16.809 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:16.809 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:16.809 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:16.809 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:16.809 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:16.809 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:12:16.809 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:16.809 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:16.809 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:16.809 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:16.809 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:16.809 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67546 00:12:20.998 00:12:20.998 job0: (groupid=0, jobs=1): err= 0: pid=67567: Sat Sep 28 08:49:58 2024 00:12:20.998 read: IOPS=8607, BW=33.6MiB/s (35.3MB/s)(202MiB/6008msec) 00:12:20.998 slat (usec): min=4, max=7594, avg=69.34, stdev=266.56 00:12:20.998 clat (usec): min=1911, max=21400, avg=10185.32, stdev=1740.33 00:12:20.998 lat (usec): min=1922, max=21415, avg=10254.66, stdev=1742.59 00:12:20.998 clat percentiles (usec): 00:12:20.998 | 1.00th=[ 5276], 5.00th=[ 7898], 10.00th=[ 8717], 20.00th=[ 9372], 00:12:20.998 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:12:20.998 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11600], 95.00th=[13829], 00:12:20.998 | 99.00th=[16057], 99.50th=[16581], 99.90th=[19006], 99.95th=[21103], 00:12:20.998 | 99.99th=[21365] 00:12:20.998 bw ( KiB/s): min= 3056, max=22840, per=50.44%, avg=17366.00, stdev=6665.83, samples=12 00:12:20.998 iops : min= 764, max= 5710, avg=4341.50, stdev=1666.46, samples=12 00:12:20.998 write: IOPS=5095, BW=19.9MiB/s (20.9MB/s)(102MiB/5133msec); 0 zone resets 00:12:20.998 slat (usec): min=16, max=5900, avg=80.89, stdev=207.55 00:12:20.998 clat (usec): min=2585, max=19724, avg=8996.08, stdev=1611.10 00:12:20.998 lat (usec): min=2613, max=19748, avg=9076.97, stdev=1618.29 00:12:20.998 clat percentiles (usec): 00:12:20.998 | 1.00th=[ 4113], 5.00th=[ 5407], 10.00th=[ 7373], 20.00th=[ 8291], 00:12:20.998 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:12:20.998 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10683], 00:12:20.998 | 99.00th=[13829], 99.50th=[14877], 99.90th=[18220], 99.95th=[18744], 00:12:20.998 | 99.99th=[19006] 00:12:20.998 bw ( KiB/s): min= 3264, max=22712, per=85.35%, avg=17395.33, stdev=6565.97, samples=12 00:12:20.998 iops : min= 816, max= 5678, avg=4348.83, stdev=1641.49, samples=12 00:12:20.998 lat (msec) : 2=0.01%, 4=0.39%, 10=59.98%, 20=39.56%, 50=0.06% 00:12:20.998 cpu : usr=4.99%, sys=20.19%, ctx=4559, majf=0, minf=102 00:12:20.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:20.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:20.998 issued rwts: total=51715,26154,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:20.998 00:12:20.998 Run status group 0 (all jobs): 00:12:20.998 READ: bw=33.6MiB/s (35.3MB/s), 33.6MiB/s-33.6MiB/s (35.3MB/s-35.3MB/s), io=202MiB (212MB), run=6008-6008msec 00:12:20.998 WRITE: bw=19.9MiB/s (20.9MB/s), 19.9MiB/s-19.9MiB/s (20.9MB/s-20.9MB/s), io=102MiB (107MB), run=5133-5133msec 00:12:20.998 00:12:20.998 Disk stats (read/write): 00:12:20.998 nvme0n1: ios=50961/25600, merge=0/0, ticks=499104/215965, in_queue=715069, util=98.60% 00:12:20.998 08:49:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:12:21.257 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67643 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:21.516 08:49:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:12:21.516 [global] 00:12:21.516 thread=1 00:12:21.516 invalidate=1 00:12:21.516 rw=randrw 00:12:21.516 time_based=1 00:12:21.516 runtime=6 00:12:21.516 ioengine=libaio 00:12:21.516 direct=1 00:12:21.516 bs=4096 00:12:21.516 iodepth=128 00:12:21.516 norandommap=0 00:12:21.516 numjobs=1 00:12:21.516 00:12:21.516 verify_dump=1 00:12:21.516 verify_backlog=512 00:12:21.516 verify_state_save=0 00:12:21.516 do_verify=1 00:12:21.516 verify=crc32c-intel 00:12:21.516 [job0] 00:12:21.516 filename=/dev/nvme0n1 00:12:21.516 Could not set queue depth (nvme0n1) 00:12:21.775 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:21.775 fio-3.35 00:12:21.775 Starting 1 thread 00:12:22.714 08:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:22.973 08:50:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:12:23.231 
08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:12:23.231 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:23.231 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:23.231 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:23.231 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:23.231 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:23.231 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:12:23.231 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:23.231 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:23.231 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:23.231 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:23.231 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:23.231 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:23.489 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:12:23.748 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:12:23.748 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:23.748 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:23.748 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:23.748 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:12:23.748 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:23.748 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:12:23.748 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:23.748 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:23.748 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:23.748 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:23.748 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:23.748 08:50:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67643 00:12:27.953 00:12:27.953 job0: (groupid=0, jobs=1): err= 0: pid=67670: Sat Sep 28 08:50:05 2024 00:12:27.953 read: IOPS=9608, BW=37.5MiB/s (39.4MB/s)(226MiB/6008msec) 00:12:27.953 slat (usec): min=3, max=7515, avg=52.47, stdev=235.83 00:12:27.953 clat (usec): min=671, max=18207, avg=9230.58, stdev=2440.46 00:12:27.953 lat (usec): min=683, max=18217, avg=9283.05, stdev=2459.89 00:12:27.953 clat percentiles (usec): 00:12:27.953 | 1.00th=[ 3195], 5.00th=[ 4621], 10.00th=[ 5735], 20.00th=[ 7177], 00:12:27.953 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10028], 00:12:27.953 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11469], 95.00th=[12780], 00:12:27.953 | 99.00th=[15926], 99.50th=[16188], 99.90th=[16909], 99.95th=[17171], 00:12:27.953 | 99.99th=[17695] 00:12:27.953 bw ( KiB/s): min= 4816, max=31104, per=50.74%, avg=19502.00, stdev=7029.93, samples=12 00:12:27.953 iops : min= 1204, max= 7776, avg=4875.50, stdev=1757.48, samples=12 00:12:27.953 write: IOPS=5627, BW=22.0MiB/s (23.1MB/s)(115MiB/5224msec); 0 zone resets 00:12:27.953 slat (usec): min=11, max=2473, avg=63.60, stdev=176.25 00:12:27.953 clat (usec): min=660, max=17039, avg=7817.50, stdev=2382.49 00:12:27.953 lat (usec): min=697, max=17067, avg=7881.10, stdev=2404.20 00:12:27.953 clat percentiles (usec): 00:12:27.953 | 1.00th=[ 2737], 5.00th=[ 3752], 10.00th=[ 4359], 20.00th=[ 5211], 00:12:27.953 | 30.00th=[ 6063], 40.00th=[ 7898], 50.00th=[ 8717], 60.00th=[ 9110], 00:12:27.953 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10552], 00:12:27.953 | 99.00th=[13173], 99.50th=[14222], 99.90th=[15664], 99.95th=[15926], 00:12:27.953 | 99.99th=[16712] 00:12:27.953 bw ( KiB/s): min= 5032, max=30576, per=86.89%, avg=19560.00, stdev=6874.88, samples=12 00:12:27.953 iops : min= 1258, max= 7644, avg=4890.00, stdev=1718.72, samples=12 00:12:27.953 lat (usec) : 750=0.01%, 1000=0.01% 00:12:27.953 lat (msec) : 2=0.19%, 4=3.75%, 10=64.88%, 20=31.16% 00:12:27.953 cpu : usr=5.21%, sys=20.04%, ctx=4867, majf=0, minf=114 00:12:27.953 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:27.953 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.953 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:27.953 issued rwts: total=57729,29398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.953 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:12:27.953 00:12:27.953 Run status group 0 (all jobs): 00:12:27.953 READ: bw=37.5MiB/s (39.4MB/s), 37.5MiB/s-37.5MiB/s (39.4MB/s-39.4MB/s), io=226MiB (236MB), run=6008-6008msec 00:12:27.953 WRITE: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=115MiB (120MB), run=5224-5224msec 00:12:27.953 00:12:27.953 Disk stats (read/write): 00:12:27.953 nvme0n1: ios=57263/28724, merge=0/0, ticks=506160/210862, in_queue=717022, util=98.63% 00:12:27.953 08:50:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:27.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:27.953 08:50:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:27.953 08:50:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:12:27.953 08:50:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:27.953 08:50:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.953 08:50:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:27.953 08:50:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:27.953 08:50:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:12:27.953 08:50:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.520 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:12:28.520 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:12:28.520 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:12:28.520 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:12:28.520 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:28.520 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:28.520 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:28.520 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:28.520 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.520 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.520 rmmod nvme_tcp 00:12:28.520 rmmod nvme_fabrics 00:12:28.520 rmmod nvme_keyring 00:12:28.520 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:28.520 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:28.520 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:28.521 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 67451 ']' 
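The check_ana_state calls traced in multipath.sh above poll /sys/block/<ctrl-path>/ana_state until it reports the expected ANA state for that path. Only the immediate-match case appears in this log, so the retry loop below is an assumption; the 20-second default, the sysfs file, and the string comparison are taken from the trace. A minimal sketch:

    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state

        while (( timeout-- > 0 )); do
            # the sysfs node can vanish briefly while a path reconnects, so test for it first
            [[ -e $ana_state_f && $(<"$ana_state_f") == "$ana_state" ]] && return 0
            sleep 1
        done
        echo "ana_state of $path never reached $ana_state" >&2
        return 1
    }

Used as in the trace, e.g. check_ana_state nvme0c0n1 inaccessible right after the 10.0.0.3 listener is switched away from optimized.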
00:12:28.521 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 67451 00:12:28.521 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 67451 ']' 00:12:28.521 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 67451 00:12:28.521 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:12:28.521 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:28.521 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67451 00:12:28.521 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:28.521 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:28.521 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67451' 00:12:28.521 killing process with pid 67451 00:12:28.521 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 67451 00:12:28.521 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 67451 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:29.901 08:50:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:12:29.901 ************************************ 00:12:29.901 END TEST nvmf_target_multipath 00:12:29.901 ************************************ 00:12:29.901 00:12:29.901 real 0m20.970s 00:12:29.901 user 1m15.928s 00:12:29.901 sys 0m9.718s 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:29.901 ************************************ 00:12:29.901 START TEST nvmf_zcopy 00:12:29.901 ************************************ 00:12:29.901 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:30.161 * Looking for test storage... 
00:12:30.161 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:30.161 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:30.161 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:12:30.161 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:30.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.161 --rc genhtml_branch_coverage=1 00:12:30.161 --rc genhtml_function_coverage=1 00:12:30.161 --rc genhtml_legend=1 00:12:30.161 --rc geninfo_all_blocks=1 00:12:30.161 --rc geninfo_unexecuted_blocks=1 00:12:30.161 00:12:30.161 ' 00:12:30.161 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:30.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.161 --rc genhtml_branch_coverage=1 00:12:30.161 --rc genhtml_function_coverage=1 00:12:30.161 --rc genhtml_legend=1 00:12:30.162 --rc geninfo_all_blocks=1 00:12:30.162 --rc geninfo_unexecuted_blocks=1 00:12:30.162 00:12:30.162 ' 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:30.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.162 --rc genhtml_branch_coverage=1 00:12:30.162 --rc genhtml_function_coverage=1 00:12:30.162 --rc genhtml_legend=1 00:12:30.162 --rc geninfo_all_blocks=1 00:12:30.162 --rc geninfo_unexecuted_blocks=1 00:12:30.162 00:12:30.162 ' 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:30.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.162 --rc genhtml_branch_coverage=1 00:12:30.162 --rc genhtml_function_coverage=1 00:12:30.162 --rc genhtml_legend=1 00:12:30.162 --rc geninfo_all_blocks=1 00:12:30.162 --rc geninfo_unexecuted_blocks=1 00:12:30.162 00:12:30.162 ' 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
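The nvmftestfini traced just before this zcopy suite started unwinds the veth/bridge topology the multipath run had built (and that nvmf_veth_init rebuilds below). Condensed from the ip and iptables commands visible in the trace, showing only one of the two initiator/target pairs; the final namespace removal is an assumption based on _remove_spdk_ns:

    # drop the SPDK_NVMF-tagged firewall rules, then unwind the topology
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip link set nvmf_init_br nomaster
    ip link set nvmf_tgt_br nomaster
    ip link set nvmf_init_br down
    ip link set nvmf_tgt_br down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk    # assumed equivalent of _remove_spdk_ns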
00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:30.162 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
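common.sh generates one host NQN/ID pair per run (nvme gen-hostnqn above) and packs it into the NVME_HOST array. The zcopy test drives the initiator through bdevperf rather than the kernel, but the multipath test earlier in this log consumed the same variables as kernel connect arguments; roughly, with the extra -g/-G flags from that trace omitted:

    NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403
    NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420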
00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:30.162 Cannot find device "nvmf_init_br" 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:12:30.162 08:50:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:30.162 Cannot find device "nvmf_init_br2" 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:30.162 Cannot find device "nvmf_tgt_br" 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:30.162 Cannot find device "nvmf_tgt_br2" 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:30.162 Cannot find device "nvmf_init_br" 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:12:30.162 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:30.422 Cannot find device "nvmf_init_br2" 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:30.422 Cannot find device "nvmf_tgt_br" 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:30.422 Cannot find device "nvmf_tgt_br2" 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:30.422 Cannot find device "nvmf_br" 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:30.422 Cannot find device "nvmf_init_if" 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:30.422 Cannot find device "nvmf_init_if2" 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:30.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:30.422 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:30.422 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:30.423 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:30.423 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:30.423 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:30.423 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:30.423 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:30.423 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:30.423 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:30.423 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:30.683 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:30.683 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:30.683 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:30.683 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:30.684 08:50:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:30.684 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:30.684 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:12:30.684 00:12:30.684 --- 10.0.0.3 ping statistics --- 00:12:30.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.684 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:30.684 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:30.684 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:12:30.684 00:12:30.684 --- 10.0.0.4 ping statistics --- 00:12:30.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.684 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:30.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:30.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:30.684 00:12:30.684 --- 10.0.0.1 ping statistics --- 00:12:30.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.684 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:30.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:30.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:12:30.684 00:12:30.684 --- 10.0.0.2 ping statistics --- 00:12:30.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.684 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:30.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=67985 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 67985 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 67985 ']' 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:30.684 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:30.684 [2024-09-28 08:50:08.634049] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
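nvmf_veth_init above is what gives every target test in this run its network: a nvmf_tgt_ns_spdk namespace holding the target interfaces (10.0.0.3 and 10.0.0.4), bridged over veth pairs to the initiator side (10.0.0.1 and 10.0.0.2), with iptables openings for port 4420 and a ping check per address before the target is started. A condensed single-path version of the traced commands:

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # host (initiator side) should now reach the target namespace

The sub-millisecond round-trip times in the ping output above are expected, since all four addresses sit on the same in-kernel bridge.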
00:12:30.684 [2024-09-28 08:50:08.634489] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.942 [2024-09-28 08:50:08.811106] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.202 [2024-09-28 08:50:09.038956] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.202 [2024-09-28 08:50:09.039378] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.202 [2024-09-28 08:50:09.039415] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.202 [2024-09-28 08:50:09.039435] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.202 [2024-09-28 08:50:09.039460] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.202 [2024-09-28 08:50:09.039501] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.461 [2024-09-28 08:50:09.212938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:31.720 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.720 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:12:31.720 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:12:31.720 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:31.720 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.980 [2024-09-28 08:50:09.726934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:12:31.980 [2024-09-28 08:50:09.743231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.980 malloc0 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:12:31.980 { 00:12:31.980 "params": { 00:12:31.980 "name": "Nvme$subsystem", 00:12:31.980 "trtype": "$TEST_TRANSPORT", 00:12:31.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:31.980 "adrfam": "ipv4", 00:12:31.980 "trsvcid": "$NVMF_PORT", 00:12:31.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:31.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:31.980 "hdgst": ${hdgst:-false}, 00:12:31.980 "ddgst": ${ddgst:-false} 00:12:31.980 }, 00:12:31.980 "method": "bdev_nvme_attach_controller" 00:12:31.980 } 00:12:31.980 EOF 00:12:31.980 )") 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
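Before the bdevperf configuration that follows, the rpc_cmd calls traced above have already configured the zero-copy target end to end. Condensed into one place as a sketch (rpc_cmd is the harness's RPC helper, shown here exactly as traced; the default /var/tmp/spdk.sock RPC socket named earlier in the log is assumed):

# Target-side setup replayed from the zcopy.sh trace above (sketch, not a new step in this run)
rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy                     # TCP transport with zero-copy enabled
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
rpc_cmd bdev_malloc_create 32 4096 -b malloc0                            # 32 MB malloc bdev, 4096-byte blocks
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1    # malloc0 becomes NSID 1 on cnode1

The explicit -n 1 on the last call is why every later nvmf_subsystem_add_ns attempt against cnode1 in this log is rejected with "Requested NSID 1 already in use".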
00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:12:31.980 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:12:31.980 "params": { 00:12:31.980 "name": "Nvme1", 00:12:31.980 "trtype": "tcp", 00:12:31.980 "traddr": "10.0.0.3", 00:12:31.980 "adrfam": "ipv4", 00:12:31.980 "trsvcid": "4420", 00:12:31.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:31.980 "hdgst": false, 00:12:31.980 "ddgst": false 00:12:31.980 }, 00:12:31.980 "method": "bdev_nvme_attach_controller" 00:12:31.980 }' 00:12:31.980 [2024-09-28 08:50:09.913789] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:12:31.980 [2024-09-28 08:50:09.914268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68024 ] 00:12:32.239 [2024-09-28 08:50:10.080497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.498 [2024-09-28 08:50:10.274682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.498 [2024-09-28 08:50:10.447284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:32.757 Running I/O for 10 seconds... 00:12:43.032 5068.00 IOPS, 39.59 MiB/s 4961.00 IOPS, 38.76 MiB/s 4972.67 IOPS, 38.85 MiB/s 5024.50 IOPS, 39.25 MiB/s 5051.80 IOPS, 39.47 MiB/s 5083.00 IOPS, 39.71 MiB/s 5107.29 IOPS, 39.90 MiB/s 5119.62 IOPS, 40.00 MiB/s 5127.78 IOPS, 40.06 MiB/s 5130.00 IOPS, 40.08 MiB/s 00:12:43.032 Latency(us) 00:12:43.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.032 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:43.032 Verification LBA range: start 0x0 length 0x1000 00:12:43.032 Nvme1n1 : 10.02 5132.73 40.10 0.00 0.00 24870.90 2710.81 33125.47 00:12:43.032 =================================================================================================================== 00:12:43.032 Total : 5132.73 40.10 0.00 0.00 24870.90 2710.81 33125.47 00:12:43.981 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68153 00:12:43.981 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:43.981 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:43.981 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:43.981 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:43.981 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:12:43.981 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:12:43.981 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:12:43.981 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:12:43.981 { 00:12:43.981 "params": { 00:12:43.981 "name": "Nvme$subsystem", 00:12:43.981 "trtype": "$TEST_TRANSPORT", 00:12:43.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:43.981 "adrfam": "ipv4", 00:12:43.981 "trsvcid": "$NVMF_PORT", 00:12:43.981 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:12:43.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:43.981 "hdgst": ${hdgst:-false}, 00:12:43.981 "ddgst": ${ddgst:-false} 00:12:43.981 }, 00:12:43.981 "method": "bdev_nvme_attach_controller" 00:12:43.981 } 00:12:43.981 EOF 00:12:43.981 )") 00:12:43.981 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:12:43.981 [2024-09-28 08:50:21.628010] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.981 [2024-09-28 08:50:21.628081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.981 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:12:43.981 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:12:43.981 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:12:43.981 "params": { 00:12:43.981 "name": "Nvme1", 00:12:43.981 "trtype": "tcp", 00:12:43.981 "traddr": "10.0.0.3", 00:12:43.981 "adrfam": "ipv4", 00:12:43.981 "trsvcid": "4420", 00:12:43.981 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:43.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:43.981 "hdgst": false, 00:12:43.981 "ddgst": false 00:12:43.981 }, 00:12:43.981 "method": "bdev_nvme_attach_controller" 00:12:43.981 }' 00:12:43.981 [2024-09-28 08:50:21.640000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.981 [2024-09-28 08:50:21.640060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.981 [2024-09-28 08:50:21.651940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.981 [2024-09-28 08:50:21.651992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.981 [2024-09-28 08:50:21.663947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.981 [2024-09-28 08:50:21.664029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.981 [2024-09-28 08:50:21.675974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.981 [2024-09-28 08:50:21.676028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.981 [2024-09-28 08:50:21.687970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.981 [2024-09-28 08:50:21.688029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.981 [2024-09-28 08:50:21.699947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.981 [2024-09-28 08:50:21.699983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.981 [2024-09-28 08:50:21.711946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.981 [2024-09-28 08:50:21.712023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.981 [2024-09-28 08:50:21.723937] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.981 [2024-09-28 08:50:21.723986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.981 [2024-09-28 08:50:21.735293] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:12:43.981 [2024-09-28 08:50:21.735465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68153 ] 00:12:43.981 [2024-09-28 08:50:21.736043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.981 [2024-09-28 08:50:21.736071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.747993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.748061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.759967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.760021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.771979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.772030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.783983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.784039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.796013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.796064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.808000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.808057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.819967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.820019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.832032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.832100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.844040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.844108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.856043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.856114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.868014] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.868063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.880013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.880083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.892038] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.892089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.904079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.904141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.908441] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.982 [2024-09-28 08:50:21.916045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.916101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.928056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.928110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.940044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.940094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.952043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.952097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:43.982 [2024-09-28 08:50:21.964056] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:43.982 [2024-09-28 08:50:21.964108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.241 [2024-09-28 08:50:21.976058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.241 [2024-09-28 08:50:21.976114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.241 [2024-09-28 08:50:21.988160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.241 [2024-09-28 08:50:21.988258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.241 [2024-09-28 08:50:22.000152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.241 [2024-09-28 08:50:22.000255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.241 [2024-09-28 08:50:22.012085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.241 [2024-09-28 08:50:22.012144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.241 [2024-09-28 08:50:22.024112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.241 [2024-09-28 08:50:22.024171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.241 [2024-09-28 08:50:22.036130] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.241 [2024-09-28 08:50:22.036173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.241 [2024-09-28 08:50:22.048216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.241 [2024-09-28 08:50:22.048311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.241 [2024-09-28 08:50:22.060159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:44.241 [2024-09-28 08:50:22.060238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.241 [2024-09-28 08:50:22.072096] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.241 [2024-09-28 08:50:22.072150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.241 [2024-09-28 08:50:22.082127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.242 [2024-09-28 08:50:22.084106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.242 [2024-09-28 08:50:22.084156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.242 [2024-09-28 08:50:22.096142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.242 [2024-09-28 08:50:22.096216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.242 [2024-09-28 08:50:22.108105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.242 [2024-09-28 08:50:22.108158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.242 [2024-09-28 08:50:22.120114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.242 [2024-09-28 08:50:22.120183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.242 [2024-09-28 08:50:22.132097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.242 [2024-09-28 08:50:22.132148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.242 [2024-09-28 08:50:22.144121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.242 [2024-09-28 08:50:22.144190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.242 [2024-09-28 08:50:22.156225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.242 [2024-09-28 08:50:22.156299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.242 [2024-09-28 08:50:22.168194] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.242 [2024-09-28 08:50:22.168254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.242 [2024-09-28 08:50:22.180159] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.242 [2024-09-28 08:50:22.180224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.242 [2024-09-28 08:50:22.192186] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.242 [2024-09-28 08:50:22.192259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.242 [2024-09-28 08:50:22.204148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.242 [2024-09-28 08:50:22.204199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.242 [2024-09-28 08:50:22.216169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.242 [2024-09-28 08:50:22.216238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.242 [2024-09-28 08:50:22.228157] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:44.242 [2024-09-28 08:50:22.228221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.240218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.240295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.252212] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.252262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.259093] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:44.562 [2024-09-28 08:50:22.264236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.264294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.276374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.276437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.288281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.288341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.300264] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.300314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.312322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.312370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.324225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.324288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.336236] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.336290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.348335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.348374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.360308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.360378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.372290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.372345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.384288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.384343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.396307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 
[2024-09-28 08:50:22.396363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.408319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.408372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.420310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.420366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.432390] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.432461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.444358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.444418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 Running I/O for 5 seconds... 00:12:44.562 [2024-09-28 08:50:22.461002] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.461065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.477658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.477714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.493844] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.493935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.511121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.511175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.527120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.527180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.562 [2024-09-28 08:50:22.544753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.562 [2024-09-28 08:50:22.544818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.560174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.560248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.574964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.575030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.591687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.591747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.607438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.607513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 
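From this point the log interleaves the 5-second randrw bdevperf run with a stream of rejected namespace adds: each "Requested NSID 1 already in use" / "Unable to add namespace" pair is one nvmf_subsystem_add_ns RPC that fails while the subsystem is paused for the change (hence the nvmf_rpc_ns_paused frame), because malloc0 already occupies NSID 1. A loop along the following lines would produce the same pattern while the perfpid job keeps zero-copy I/O in flight; this is a hypothetical reconstruction for illustration, not the literal contents of target/zcopy.sh:

# Hypothetical driver for the rejected add_ns stream seen above;
# every attempt is expected to fail since NSID 1 is already taken by malloc0.
while kill -0 "$perfpid" 2> /dev/null; do
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done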
[2024-09-28 08:50:22.624328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.624404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.641238] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.641296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.656515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.656574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.672080] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.672136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.682879] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.682938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.699101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.699156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.714425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.714484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.725895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.725948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.741627] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.741731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.756970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.757043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.773523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.773582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.790676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.790733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:44.821 [2024-09-28 08:50:22.806645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:44.821 [2024-09-28 08:50:22.806721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:22.823162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:22.823234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:22.834016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 
08:50:22.834076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:22.851172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:22.851229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:22.865923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:22.866063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:22.884166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:22.884240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:22.900339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:22.900398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:22.917562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:22.917617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:22.934261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:22.934335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:22.950741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:22.950838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:22.968533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:22.968591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:22.984539] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:22.984595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:23.002020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:23.002095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:23.018570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:23.018627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:23.034487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:23.034555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:23.051322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:23.051379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.081 [2024-09-28 08:50:23.068540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.081 [2024-09-28 08:50:23.068615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.340 [2024-09-28 08:50:23.084767] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.340 [2024-09-28 08:50:23.084848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.340 [2024-09-28 08:50:23.095997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.340 [2024-09-28 08:50:23.096057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.340 [2024-09-28 08:50:23.112594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.340 [2024-09-28 08:50:23.112649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.340 [2024-09-28 08:50:23.125898] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.340 [2024-09-28 08:50:23.125954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.340 [2024-09-28 08:50:23.142163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.340 [2024-09-28 08:50:23.142220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.340 [2024-09-28 08:50:23.157359] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.340 [2024-09-28 08:50:23.157417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.340 [2024-09-28 08:50:23.168663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.340 [2024-09-28 08:50:23.168763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.340 [2024-09-28 08:50:23.185211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.340 [2024-09-28 08:50:23.185274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.340 [2024-09-28 08:50:23.200776] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.340 [2024-09-28 08:50:23.200850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.340 [2024-09-28 08:50:23.213098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.341 [2024-09-28 08:50:23.213175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.341 [2024-09-28 08:50:23.225432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.341 [2024-09-28 08:50:23.225488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.341 [2024-09-28 08:50:23.241440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.341 [2024-09-28 08:50:23.241518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.341 [2024-09-28 08:50:23.258024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.341 [2024-09-28 08:50:23.258080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.341 [2024-09-28 08:50:23.274381] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.341 [2024-09-28 08:50:23.274455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.341 [2024-09-28 08:50:23.291655] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.341 [2024-09-28 08:50:23.291711] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.341 [2024-09-28 08:50:23.306299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.341 [2024-09-28 08:50:23.306374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.341 [2024-09-28 08:50:23.321701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.341 [2024-09-28 08:50:23.321763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.341 [2024-09-28 08:50:23.334933] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.341 [2024-09-28 08:50:23.335040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.599 [2024-09-28 08:50:23.352771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.599 [2024-09-28 08:50:23.352835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.599 [2024-09-28 08:50:23.368299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.599 [2024-09-28 08:50:23.368374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 [2024-09-28 08:50:23.379837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.379906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 [2024-09-28 08:50:23.396464] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.396540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 [2024-09-28 08:50:23.412809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.412872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 [2024-09-28 08:50:23.428698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.428778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 [2024-09-28 08:50:23.444005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.444068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 9664.00 IOPS, 75.50 MiB/s [2024-09-28 08:50:23.458860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.458920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 [2024-09-28 08:50:23.475041] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.475098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 [2024-09-28 08:50:23.492597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.492698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 [2024-09-28 08:50:23.509183] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.509240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 [2024-09-28 08:50:23.526315] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.526374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 [2024-09-28 08:50:23.542808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.542890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 [2024-09-28 08:50:23.559771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.559861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 [2024-09-28 08:50:23.575350] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.575405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.600 [2024-09-28 08:50:23.592109] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.600 [2024-09-28 08:50:23.592185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.608085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.608140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.624150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.624225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.641108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.641163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.658403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.658462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.673210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.673265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.688763] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.688841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.700067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.700122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.713715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.713774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.730225] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.730281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.744859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.744939] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.760137] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.760194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.775259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.775335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.791827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.791900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.809224] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.809285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.825660] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.825717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:45.859 [2024-09-28 08:50:23.842346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:45.859 [2024-09-28 08:50:23.842402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:23.859423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:23.859481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:23.876064] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:23.876121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:23.893302] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:23.893371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:23.908918] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:23.908992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:23.925119] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:23.925171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:23.942299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:23.942353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:23.959530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:23.959584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:23.975422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:23.975478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:23.993145] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:23.993201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:24.009173] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:24.009259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:24.019980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:24.020036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:24.035764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:24.035861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:24.051778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:24.051863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:24.067757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:24.067840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:24.083495] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:24.083551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.119 [2024-09-28 08:50:24.099514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.119 [2024-09-28 08:50:24.099579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.118071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.118143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.134348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.134403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.151198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.151253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.167491] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.167548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.183632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.183705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.195260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.195316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.210812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.210898] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.226541] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.226598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.244256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.244316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.259725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.259783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.275022] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.275078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.292227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.292284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.306842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.306911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.322545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.322602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.339875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.339940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.355675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.355746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.379 [2024-09-28 08:50:24.366318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.379 [2024-09-28 08:50:24.366375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.383287] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.383342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.399063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.399117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.416311] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.416367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.432410] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.432469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.448559] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.448631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 9765.00 IOPS, 76.29 MiB/s [2024-09-28 08:50:24.461514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.461587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.479810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.479877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.495533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.495591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.512305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.512365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.524928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.524975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.544003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.544082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.560820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.560898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.577391] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.577448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.590091] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.590148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.608297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.608374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.638 [2024-09-28 08:50:24.624425] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.638 [2024-09-28 08:50:24.624480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.640585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.640642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.651437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.651492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.667322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 
08:50:24.667378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.682482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.682537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.698073] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.698146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.708854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.708910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.725550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.725607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.741745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.741833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.757718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.757790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.768893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.768951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.784908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.784968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.800047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.800136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.811760] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.811830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.827529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.827585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.843394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.843450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.860214] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.860270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.875345] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.875402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:46.898 [2024-09-28 08:50:24.891136] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:46.898 [2024-09-28 08:50:24.891236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:24.903857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:24.903964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:24.923685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:24.923746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:24.939979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:24.940035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:24.957492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:24.957547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:24.972129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:24.972185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:24.988573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:24.988629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:25.003905] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:25.003961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:25.018963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:25.019019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:25.036392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:25.036449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:25.051500] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:25.051556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:25.062887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:25.062944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:25.080290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:25.080335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:25.096456] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:25.096512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:25.112698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:25.112759] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:25.131507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:25.131581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.158 [2024-09-28 08:50:25.146328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.158 [2024-09-28 08:50:25.146385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.417 [2024-09-28 08:50:25.161429] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.417 [2024-09-28 08:50:25.161483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.417 [2024-09-28 08:50:25.178239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.417 [2024-09-28 08:50:25.178294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.417 [2024-09-28 08:50:25.194260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.417 [2024-09-28 08:50:25.194316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.205027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.205096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.221442] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.221499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.236722] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.236767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.254294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.254364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.269724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.269779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.285732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.285787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.302789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.302895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.318544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.318612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.334335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.334392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.345576] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.345617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.361596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.361653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.376457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.376513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.387679] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.387735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.418 [2024-09-28 08:50:25.404166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.418 [2024-09-28 08:50:25.404224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.419651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.419721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.436443] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.436500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 9746.33 IOPS, 76.14 MiB/s [2024-09-28 08:50:25.453992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.454045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.468508] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.468564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.485248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.485303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.501412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.501468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.518266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.518324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.533936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.533990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.550701] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.550744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.566581] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 
08:50:25.566639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.582553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.582631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.595069] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.595120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.610223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.610284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.624078] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.624136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.640550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.640625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.678 [2024-09-28 08:50:25.656895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.678 [2024-09-28 08:50:25.656957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.937 [2024-09-28 08:50:25.674076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.937 [2024-09-28 08:50:25.674121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.689864] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.689952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.702400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.702456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.719562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.719618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.734613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.734669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.749603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.749658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.765131] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.765216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.776156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.776227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.792395] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.792450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.808015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.808071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.824948] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.825041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.841606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.841663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.858348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.858404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.869293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.869348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.885342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.885384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.900363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.900436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.916252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.916320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:47.938 [2024-09-28 08:50:25.929079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:47.938 [2024-09-28 08:50:25.929141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.197 [2024-09-28 08:50:25.947394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.197 [2024-09-28 08:50:25.947451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.197 [2024-09-28 08:50:25.964003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.197 [2024-09-28 08:50:25.964059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.197 [2024-09-28 08:50:25.976323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.197 [2024-09-28 08:50:25.976380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.197 [2024-09-28 08:50:25.993346] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.197 [2024-09-28 08:50:25.993403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.197 [2024-09-28 08:50:26.009489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.197 [2024-09-28 08:50:26.009562] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.197 [2024-09-28 08:50:26.026503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.197 [2024-09-28 08:50:26.026560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.197 [2024-09-28 08:50:26.043871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.197 [2024-09-28 08:50:26.043928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.197 [2024-09-28 08:50:26.058976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.197 [2024-09-28 08:50:26.059060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.197 [2024-09-28 08:50:26.075651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.197 [2024-09-28 08:50:26.075716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.197 [2024-09-28 08:50:26.091980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.197 [2024-09-28 08:50:26.092041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.198 [2024-09-28 08:50:26.107646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.198 [2024-09-28 08:50:26.107703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.198 [2024-09-28 08:50:26.124378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.198 [2024-09-28 08:50:26.124434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.198 [2024-09-28 08:50:26.140920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.198 [2024-09-28 08:50:26.140979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.198 [2024-09-28 08:50:26.158162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.198 [2024-09-28 08:50:26.158234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.198 [2024-09-28 08:50:26.173503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.198 [2024-09-28 08:50:26.173593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.198 [2024-09-28 08:50:26.189727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.198 [2024-09-28 08:50:26.189784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.206960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.207016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.223156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.223228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.240303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.240360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.256728] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.256790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.273676] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.273732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.289249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.289305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.300690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.300766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.317637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.317694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.332648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.332747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.347368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.347424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.364368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.364460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.380823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.380894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.397568] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.397624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.413694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.413737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.429947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.430028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.457 [2024-09-28 08:50:26.440543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.457 [2024-09-28 08:50:26.440611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.716 9737.25 IOPS, 76.07 MiB/s [2024-09-28 08:50:26.457767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.457852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.472718] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 
08:50:26.472776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.487618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.487674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.504161] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.504234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.521129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.521185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.537920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.537977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.553254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.553310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.569721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.569800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.586072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.586129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.602254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.602310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.619160] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.619264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.634649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.634707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.650960] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.651017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.667916] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.667973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.684151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.684223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.717 [2024-09-28 08:50:26.701208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.717 [2024-09-28 08:50:26.701267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.718427] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.718484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.734260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.734320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.750503] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.750607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.762544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.762605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.780105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.780153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.796861] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.796921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.812658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.812717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.825057] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.825163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.841436] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.841494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.857308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.857364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.874602] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.874659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.889613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.889678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.906278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.906321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.921084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.921142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.936883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.936929] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.948466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.948525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:48.976 [2024-09-28 08:50:26.966414] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:48.976 [2024-09-28 08:50:26.966492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.235 [2024-09-28 08:50:26.983305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.235 [2024-09-28 08:50:26.983361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.235 [2024-09-28 08:50:26.999894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.235 [2024-09-28 08:50:26.999951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.235 [2024-09-28 08:50:27.015413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.235 [2024-09-28 08:50:27.015469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.235 [2024-09-28 08:50:27.031525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.235 [2024-09-28 08:50:27.031581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.235 [2024-09-28 08:50:27.048786] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.235 [2024-09-28 08:50:27.048857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.235 [2024-09-28 08:50:27.064985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.235 [2024-09-28 08:50:27.065099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.235 [2024-09-28 08:50:27.082007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.235 [2024-09-28 08:50:27.082066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.235 [2024-09-28 08:50:27.099152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.235 [2024-09-28 08:50:27.099220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.235 [2024-09-28 08:50:27.115075] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.235 [2024-09-28 08:50:27.115133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.235 [2024-09-28 08:50:27.128134] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.235 [2024-09-28 08:50:27.128193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.235 [2024-09-28 08:50:27.146303] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.236 [2024-09-28 08:50:27.146359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.236 [2024-09-28 08:50:27.162484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.236 [2024-09-28 08:50:27.162540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.236 [2024-09-28 08:50:27.179931] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.236 [2024-09-28 08:50:27.179988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.236 [2024-09-28 08:50:27.196102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.236 [2024-09-28 08:50:27.196159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.236 [2024-09-28 08:50:27.213595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.236 [2024-09-28 08:50:27.213653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.236 [2024-09-28 08:50:27.229291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.236 [2024-09-28 08:50:27.229334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.495 [2024-09-28 08:50:27.246441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.495 [2024-09-28 08:50:27.246486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.495 [2024-09-28 08:50:27.263404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.495 [2024-09-28 08:50:27.263448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.495 [2024-09-28 08:50:27.280622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.495 [2024-09-28 08:50:27.280689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.495 [2024-09-28 08:50:27.296703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.495 [2024-09-28 08:50:27.296747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.495 [2024-09-28 08:50:27.307924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.495 [2024-09-28 08:50:27.307965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.495 [2024-09-28 08:50:27.321714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.495 [2024-09-28 08:50:27.321755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.495 [2024-09-28 08:50:27.337031] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.495 [2024-09-28 08:50:27.337088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.495 [2024-09-28 08:50:27.353694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.495 [2024-09-28 08:50:27.353736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.495 [2024-09-28 08:50:27.370596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.495 [2024-09-28 08:50:27.370652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.495 [2024-09-28 08:50:27.387531] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.495 [2024-09-28 08:50:27.387587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.495 [2024-09-28 08:50:27.402574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.495 [2024-09-28 08:50:27.402630] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.495 [2024-09-28 08:50:27.418979] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:49.495 [2024-09-28 08:50:27.419034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.495 [2024-09-28 08:50:27.435928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:49.495 [2024-09-28 08:50:27.435984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.495 [2024-09-28 08:50:27.451615] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:49.495 [2024-09-28 08:50:27.451672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.495 9741.80 IOPS, 76.11 MiB/s [2024-09-28 08:50:27.462333] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:49.495 [2024-09-28 08:50:27.462389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.495 
00:12:49.495 Latency(us)
00:12:49.495 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:49.495 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:49.495 Nvme1n1            :       5.01    9745.17      76.13       0.00     0.00   13118.65    4766.25   23235.49
00:12:49.495 ===================================================================================================================
00:12:49.495 Total              :               9745.17      76.13       0.00     0.00   13118.65    4766.25   23235.49
00:12:49.495 [2024-09-28 08:50:27.472040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:49.495 [2024-09-28 08:50:27.472105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.754 [2024-09-28 08:50:27.483919] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:49.754 [2024-09-28 08:50:27.483973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.754 [2024-09-28 08:50:27.495929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:49.754 [2024-09-28 08:50:27.495983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.754 [2024-09-28 08:50:27.507972] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:49.754 [2024-09-28 08:50:27.508034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.754 [2024-09-28 08:50:27.519984] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:49.754 [2024-09-28 08:50:27.520054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.754 [2024-09-28 08:50:27.531991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:49.755 [2024-09-28 08:50:27.532044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.755 [2024-09-28 08:50:27.543927] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:49.755 [2024-09-28 08:50:27.543978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.755 [2024-09-28 08:50:27.556000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:49.755 [2024-09-28 08:50:27.556064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.755 [2024-09-28
08:50:27.567944] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.567996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.580001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.580075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.591996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.592051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.603962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.604015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.615940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.615991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.627963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.628015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.639989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.640048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.651993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.652050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.663970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.664021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.676003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.676068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.688015] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.688052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.700086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.700157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.712082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.712153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.724026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.724079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:49.755 [2024-09-28 08:50:27.735999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:49.755 [2024-09-28 08:50:27.736050] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:49.755 [the same error pair (subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1517:nvmf_rpc_ns_paused: "Unable to add namespace") repeats at roughly 12 ms intervals, about 60 iterations between 08:50:27.748 and 08:50:28.444, while the subsystem remains paused]
00:12:50.534 [2024-09-28 08:50:28.432453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.534 [2024-09-28 08:50:28.432504]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.534 [2024-09-28 08:50:28.444351] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:50.534 [2024-09-28 08:50:28.444400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.534 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68153) - No such process 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68153 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.534 delay0 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.534 08:50:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:12:50.793 [2024-09-28 08:50:28.712077] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:57.384 Initializing NVMe Controllers 00:12:57.384 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:57.384 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:57.384 Initialization complete. Launching workers. 
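The error loop above is deliberate: the zcopy test keeps retrying nvmf_subsystem_add_ns for an NSID that already exists, and the target rejects every attempt. Once the loop ends, the trace shows the namespace being removed, a delay bdev being layered on top of malloc0, that bdev being re-exported as NSID 1, and the abort example being run against it. A stand-alone sketch of that tail sequence, using the plain rpc.py client instead of the test's rpc_cmd wrapper (rpc.py path assumed; flags copied from the trace):

  # Replace the namespace with an artificially slow delay bdev, then abort I/O against it
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # injected read/write latencies (avg and p99), in microseconds
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Submit randrw I/O for 5 seconds at queue depth 64 and issue aborts against the slow namespace
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The abort statistics that follow (submitted/failed counts) are the payload of this test stage.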
00:12:57.384 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 359 00:12:57.384 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 646, failed to submit 33 00:12:57.384 success 521, unsuccessful 125, failed 0 00:12:57.384 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:57.384 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:57.384 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:12:57.384 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:57.384 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.384 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:57.384 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.384 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.384 rmmod nvme_tcp 00:12:57.384 rmmod nvme_fabrics 00:12:57.384 rmmod nvme_keyring 00:12:57.384 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.384 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:57.384 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:57.385 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 67985 ']' 00:12:57.385 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 67985 00:12:57.385 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 67985 ']' 00:12:57.385 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 67985 00:12:57.385 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:12:57.385 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:57.385 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67985 00:12:57.385 killing process with pid 67985 00:12:57.385 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:57.385 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:57.385 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67985' 00:12:57.385 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 67985 00:12:57.385 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 67985 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:12:58.323 08:50:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:12:58.323 00:12:58.323 real 0m28.419s 00:12:58.323 user 0m46.501s 00:12:58.323 sys 0m7.153s 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:58.323 ************************************ 00:12:58.323 END TEST nvmf_zcopy 00:12:58.323 ************************************ 00:12:58.323 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:58.583 ************************************ 00:12:58.583 START TEST nvmf_nmic 00:12:58.583 ************************************ 00:12:58.583 08:50:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:58.583 * Looking for test storage... 00:12:58.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:58.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.583 --rc genhtml_branch_coverage=1 00:12:58.583 --rc genhtml_function_coverage=1 00:12:58.583 --rc genhtml_legend=1 00:12:58.583 --rc geninfo_all_blocks=1 00:12:58.583 --rc geninfo_unexecuted_blocks=1 00:12:58.583 00:12:58.583 ' 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:58.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.583 --rc genhtml_branch_coverage=1 00:12:58.583 --rc genhtml_function_coverage=1 00:12:58.583 --rc genhtml_legend=1 00:12:58.583 --rc geninfo_all_blocks=1 00:12:58.583 --rc geninfo_unexecuted_blocks=1 00:12:58.583 00:12:58.583 ' 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:58.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.583 --rc genhtml_branch_coverage=1 00:12:58.583 --rc genhtml_function_coverage=1 00:12:58.583 --rc genhtml_legend=1 00:12:58.583 --rc geninfo_all_blocks=1 00:12:58.583 --rc geninfo_unexecuted_blocks=1 00:12:58.583 00:12:58.583 ' 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:58.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.583 --rc genhtml_branch_coverage=1 00:12:58.583 --rc genhtml_function_coverage=1 00:12:58.583 --rc genhtml_legend=1 00:12:58.583 --rc geninfo_all_blocks=1 00:12:58.583 --rc geninfo_unexecuted_blocks=1 00:12:58.583 00:12:58.583 ' 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.583 08:50:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:58.583 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.584 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:58.584 08:50:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.584 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:58.843 Cannot 
find device "nvmf_init_br" 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:58.843 Cannot find device "nvmf_init_br2" 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:58.843 Cannot find device "nvmf_tgt_br" 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:58.843 Cannot find device "nvmf_tgt_br2" 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:58.843 Cannot find device "nvmf_init_br" 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:58.843 Cannot find device "nvmf_init_br2" 00:12:58.843 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:58.844 Cannot find device "nvmf_tgt_br" 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:58.844 Cannot find device "nvmf_tgt_br2" 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:58.844 Cannot find device "nvmf_br" 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:58.844 Cannot find device "nvmf_init_if" 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:58.844 Cannot find device "nvmf_init_if2" 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:58.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:58.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
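The nvmf_veth_init steps traced here (and in the commands that follow) assemble a small virtual topology: a target network namespace nvmf_tgt_ns_spdk holding nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3/10.0.0.4), initiator-side interfaces nvmf_init_if/nvmf_init_if2 (10.0.0.1/10.0.0.2), and a bridge nvmf_br joining the veth peer ends. A condensed sketch for one initiator/target pair, with values taken from this trace (the run does the same for the second pair):

  # Create the namespace and one veth pair per side
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge-facing peer
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end + bridge-facing peer
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
  # Address and bring up both ends
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  # Bridge the peer ends together and open the NVMe/TCP port
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                          # initiator-to-target sanity check

The ping blocks below are exactly that sanity check, run once per address in both directions.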
00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:58.844 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:59.102 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:59.103 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:59.103 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:12:59.103 00:12:59.103 --- 10.0.0.3 ping statistics --- 00:12:59.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.103 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:59.103 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:59.103 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:12:59.103 00:12:59.103 --- 10.0.0.4 ping statistics --- 00:12:59.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.103 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:59.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:59.103 00:12:59.103 --- 10.0.0.1 ping statistics --- 00:12:59.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.103 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:59.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:59.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:12:59.103 00:12:59.103 --- 10.0.0.2 ping statistics --- 00:12:59.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.103 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=68556 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 68556 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 68556 ']' 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.103 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:59.103 [2024-09-28 08:50:37.084264] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
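nvmfappstart launches the target inside the namespace and waits for its RPC socket before the test proceeds. A rough by-hand equivalent of what the trace shows here (binary path and flags taken from the log; the polling loop is a simplification of the waitforlisten helper, not its actual implementation):

  # Start the NVMe-oF target inside the test namespace
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  NVMF_PID=$!
  # Poll the JSON-RPC socket until the app is ready to accept commands
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done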
00:12:59.103 [2024-09-28 08:50:37.084441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.363 [2024-09-28 08:50:37.267613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.623 [2024-09-28 08:50:37.520488] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.623 [2024-09-28 08:50:37.520584] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.623 [2024-09-28 08:50:37.520609] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.623 [2024-09-28 08:50:37.520624] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.623 [2024-09-28 08:50:37.520640] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.623 [2024-09-28 08:50:37.520860] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.623 [2024-09-28 08:50:37.521223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.623 [2024-09-28 08:50:37.521984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.623 [2024-09-28 08:50:37.522088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.883 [2024-09-28 08:50:37.726231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:00.452 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.452 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:13:00.452 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:00.452 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:00.452 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:00.452 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.452 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:00.452 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.452 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:00.452 [2024-09-28 08:50:38.200081] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:00.452 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.452 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:00.453 Malloc0 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:00.453 08:50:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:00.453 [2024-09-28 08:50:38.299913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:00.453 test case1: single bdev can't be used in multiple subsystems 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:00.453 [2024-09-28 08:50:38.323564] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:00.453 [2024-09-28 08:50:38.323627] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:00.453 [2024-09-28 08:50:38.323647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:00.453 request: 00:13:00.453 { 00:13:00.453 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:00.453 "namespace": { 00:13:00.453 "bdev_name": "Malloc0", 00:13:00.453 "no_auto_visible": false 00:13:00.453 }, 00:13:00.453 "method": "nvmf_subsystem_add_ns", 00:13:00.453 "req_id": 1 00:13:00.453 } 00:13:00.453 Got JSON-RPC error response 00:13:00.453 response: 00:13:00.453 { 00:13:00.453 "code": -32602, 00:13:00.453 "message": "Invalid parameters" 00:13:00.453 } 00:13:00.453 Adding namespace failed - expected result. 00:13:00.453 test case2: host connect to nvmf target in multiple paths 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:00.453 [2024-09-28 08:50:38.339791] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.453 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:00.712 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:13:00.712 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.712 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:00.712 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.712 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:00.712 08:50:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:03.249 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:03.249 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:03.249 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.249 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:03.249 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.249 08:50:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:03.249 08:50:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:03.249 [global] 00:13:03.249 thread=1 00:13:03.249 invalidate=1 00:13:03.249 rw=write 00:13:03.249 time_based=1 00:13:03.249 runtime=1 00:13:03.249 ioengine=libaio 00:13:03.249 direct=1 00:13:03.249 bs=4096 00:13:03.249 iodepth=1 00:13:03.249 norandommap=0 00:13:03.249 numjobs=1 00:13:03.249 00:13:03.249 verify_dump=1 00:13:03.249 verify_backlog=512 00:13:03.249 verify_state_save=0 00:13:03.249 do_verify=1 00:13:03.249 verify=crc32c-intel 00:13:03.249 [job0] 00:13:03.249 filename=/dev/nvme0n1 00:13:03.249 Could not set queue depth (nvme0n1) 00:13:03.249 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:03.249 fio-3.35 00:13:03.249 Starting 1 thread 00:13:04.189 00:13:04.189 job0: (groupid=0, jobs=1): err= 0: pid=68648: Sat Sep 28 08:50:41 2024 00:13:04.189 read: IOPS=2541, BW=9.93MiB/s (10.4MB/s)(9.94MiB/1001msec) 00:13:04.189 slat (nsec): min=11476, max=62559, avg=14207.95, stdev=4130.17 00:13:04.189 clat (usec): min=171, max=501, avg=216.87, stdev=23.17 00:13:04.189 lat (usec): min=185, max=513, avg=231.08, stdev=23.57 00:13:04.189 clat percentiles (usec): 00:13:04.189 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:13:04.189 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 221], 00:13:04.189 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 258], 00:13:04.189 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 412], 99.95th=[ 433], 00:13:04.189 | 99.99th=[ 502] 00:13:04.189 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:04.189 slat (usec): min=16, max=128, avg=21.37, stdev= 7.09 00:13:04.189 clat (usec): min=106, max=247, avg=136.39, stdev=18.49 00:13:04.189 lat (usec): min=126, max=376, avg=157.77, stdev=20.62 00:13:04.189 clat percentiles (usec): 00:13:04.189 | 1.00th=[ 111], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 122], 00:13:04.189 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 137], 00:13:04.189 | 70.00th=[ 143], 80.00th=[ 151], 90.00th=[ 163], 95.00th=[ 174], 00:13:04.189 | 99.00th=[ 188], 99.50th=[ 196], 99.90th=[ 225], 99.95th=[ 239], 00:13:04.189 | 99.99th=[ 247] 00:13:04.189 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:13:04.189 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:04.189 lat (usec) : 250=95.92%, 500=4.06%, 750=0.02% 00:13:04.189 cpu : usr=2.10%, sys=7.00%, ctx=5104, majf=0, minf=5 00:13:04.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:04.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.189 issued rwts: total=2544,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:04.189 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:04.189 00:13:04.189 Run status group 0 (all jobs): 00:13:04.189 READ: bw=9.93MiB/s (10.4MB/s), 9.93MiB/s-9.93MiB/s (10.4MB/s-10.4MB/s), io=9.94MiB (10.4MB), run=1001-1001msec 00:13:04.189 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:13:04.189 00:13:04.189 Disk stats (read/write): 00:13:04.189 nvme0n1: ios=2155/2560, merge=0/0, ticks=494/391, in_queue=885, 
util=91.58% 00:13:04.189 08:50:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:04.189 rmmod nvme_tcp 00:13:04.189 rmmod nvme_fabrics 00:13:04.189 rmmod nvme_keyring 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 68556 ']' 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 68556 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 68556 ']' 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 68556 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68556 00:13:04.189 killing process with pid 68556 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68556' 00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 68556 
00:13:04.189 08:50:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 68556 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:05.570 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:13:05.829 00:13:05.829 real 0m7.261s 00:13:05.829 user 0m21.303s 00:13:05.829 sys 0m2.475s 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:05.829 ************************************ 00:13:05.829 
END TEST nvmf_nmic 00:13:05.829 ************************************ 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:05.829 ************************************ 00:13:05.829 START TEST nvmf_fio_target 00:13:05.829 ************************************ 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:05.829 * Looking for test storage... 00:13:05.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:13:05.829 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:06.089 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:06.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.089 --rc genhtml_branch_coverage=1 00:13:06.089 --rc genhtml_function_coverage=1 00:13:06.089 --rc genhtml_legend=1 00:13:06.089 --rc geninfo_all_blocks=1 00:13:06.089 --rc geninfo_unexecuted_blocks=1 00:13:06.089 00:13:06.090 ' 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:06.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.090 --rc genhtml_branch_coverage=1 00:13:06.090 --rc genhtml_function_coverage=1 00:13:06.090 --rc genhtml_legend=1 00:13:06.090 --rc geninfo_all_blocks=1 00:13:06.090 --rc geninfo_unexecuted_blocks=1 00:13:06.090 00:13:06.090 ' 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:06.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.090 --rc genhtml_branch_coverage=1 00:13:06.090 --rc genhtml_function_coverage=1 00:13:06.090 --rc genhtml_legend=1 00:13:06.090 --rc geninfo_all_blocks=1 00:13:06.090 --rc geninfo_unexecuted_blocks=1 00:13:06.090 00:13:06.090 ' 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:06.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:06.090 --rc genhtml_branch_coverage=1 00:13:06.090 --rc genhtml_function_coverage=1 00:13:06.090 --rc genhtml_legend=1 00:13:06.090 --rc geninfo_all_blocks=1 00:13:06.090 --rc geninfo_unexecuted_blocks=1 00:13:06.090 00:13:06.090 ' 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:06.090 
08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:06.090 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:06.090 08:50:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:06.090 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:06.091 Cannot find device "nvmf_init_br" 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:06.091 Cannot find device "nvmf_init_br2" 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:06.091 Cannot find device "nvmf_tgt_br" 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:06.091 Cannot find device "nvmf_tgt_br2" 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:06.091 Cannot find device "nvmf_init_br" 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:06.091 Cannot find device "nvmf_init_br2" 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:06.091 Cannot find device "nvmf_tgt_br" 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:06.091 Cannot find device "nvmf_tgt_br2" 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:06.091 Cannot find device "nvmf_br" 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:06.091 Cannot find device "nvmf_init_if" 00:13:06.091 08:50:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:13:06.091 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:06.091 Cannot find device "nvmf_init_if2" 00:13:06.091 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:13:06.091 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:06.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:06.091 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:13:06.091 
08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:06.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:06.091 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:13:06.091 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:06.091 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:06.091 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:06.091 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:06.091 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:06.350 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:06.350 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:06.350 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:06.350 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:06.350 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:06.351 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:06.351 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:13:06.351 00:13:06.351 --- 10.0.0.3 ping statistics --- 00:13:06.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.351 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:06.351 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:06.351 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:13:06.351 00:13:06.351 --- 10.0.0.4 ping statistics --- 00:13:06.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.351 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:06.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:06.351 00:13:06.351 --- 10.0.0.1 ping statistics --- 00:13:06.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.351 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:06.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:06.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:13:06.351 00:13:06.351 --- 10.0.0.2 ping statistics --- 00:13:06.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.351 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=68887 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 68887 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 68887 ']' 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.351 08:50:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.610 [2024-09-28 08:50:44.459714] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:13:06.610 [2024-09-28 08:50:44.459897] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.869 [2024-09-28 08:50:44.633512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.869 [2024-09-28 08:50:44.805086] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.869 [2024-09-28 08:50:44.805344] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.869 [2024-09-28 08:50:44.805501] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.869 [2024-09-28 08:50:44.805625] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.869 [2024-09-28 08:50:44.805679] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.869 [2024-09-28 08:50:44.805996] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.869 [2024-09-28 08:50:44.806144] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.869 [2024-09-28 08:50:44.806652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.869 [2024-09-28 08:50:44.806694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.129 [2024-09-28 08:50:44.981448] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:07.696 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.696 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:13:07.696 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:07.696 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:07.696 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.696 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.696 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:07.954 [2024-09-28 08:50:45.764137] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.954 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:08.214 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:08.214 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:08.473 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:08.473 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:09.043 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:09.043 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:09.302 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:09.302 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:09.562 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:09.821 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:09.821 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:10.079 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:10.079 08:50:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:10.338 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:10.338 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:10.596 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:11.164 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:11.164 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:11.164 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:11.164 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:11.424 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:11.683 [2024-09-28 08:50:49.600675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:11.683 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:11.941 08:50:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:12.207 08:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:13:12.485 08:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:12.485 08:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.485 08:50:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.485 08:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:12.485 08:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:12.485 08:50:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:14.388 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:14.388 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:14.388 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:14.388 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:14.388 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.388 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:14.388 08:50:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:14.388 [global] 00:13:14.388 thread=1 00:13:14.388 invalidate=1 00:13:14.388 rw=write 00:13:14.388 time_based=1 00:13:14.388 runtime=1 00:13:14.388 ioengine=libaio 00:13:14.388 direct=1 00:13:14.388 bs=4096 00:13:14.388 iodepth=1 00:13:14.388 norandommap=0 00:13:14.388 numjobs=1 00:13:14.388 00:13:14.388 verify_dump=1 00:13:14.388 verify_backlog=512 00:13:14.388 verify_state_save=0 00:13:14.388 do_verify=1 00:13:14.388 verify=crc32c-intel 00:13:14.388 [job0] 00:13:14.388 filename=/dev/nvme0n1 00:13:14.388 [job1] 00:13:14.388 filename=/dev/nvme0n2 00:13:14.388 [job2] 00:13:14.388 filename=/dev/nvme0n3 00:13:14.388 [job3] 00:13:14.388 filename=/dev/nvme0n4 00:13:14.647 Could not set queue depth (nvme0n1) 00:13:14.647 Could not set queue depth (nvme0n2) 00:13:14.647 Could not set queue depth (nvme0n3) 00:13:14.647 Could not set queue depth (nvme0n4) 00:13:14.647 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:14.647 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:14.647 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:14.647 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:14.647 fio-3.35 00:13:14.647 Starting 4 threads 00:13:16.025 00:13:16.025 job0: (groupid=0, jobs=1): err= 0: pid=69078: Sat Sep 28 08:50:53 2024 00:13:16.025 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:16.025 slat (nsec): min=9646, max=63090, avg=14741.88, stdev=5266.68 00:13:16.025 clat (usec): min=229, max=623, avg=320.52, stdev=41.43 00:13:16.025 lat (usec): min=248, max=634, avg=335.26, stdev=41.74 00:13:16.025 clat percentiles (usec): 00:13:16.025 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:13:16.025 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 322], 00:13:16.025 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 388], 00:13:16.025 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 603], 99.95th=[ 627], 00:13:16.025 | 99.99th=[ 627] 
00:13:16.025 write: IOPS=1633, BW=6533KiB/s (6690kB/s)(6540KiB/1001msec); 0 zone resets 00:13:16.025 slat (usec): min=12, max=235, avg=26.78, stdev=21.89 00:13:16.025 clat (usec): min=137, max=1149, avg=266.42, stdev=66.53 00:13:16.025 lat (usec): min=178, max=1166, avg=293.20, stdev=76.23 00:13:16.025 clat percentiles (usec): 00:13:16.025 | 1.00th=[ 200], 5.00th=[ 215], 10.00th=[ 223], 20.00th=[ 233], 00:13:16.025 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 262], 00:13:16.025 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 392], 00:13:16.025 | 99.00th=[ 545], 99.50th=[ 603], 99.90th=[ 889], 99.95th=[ 1156], 00:13:16.025 | 99.99th=[ 1156] 00:13:16.025 bw ( KiB/s): min= 8192, max= 8192, per=32.33%, avg=8192.00, stdev= 0.00, samples=1 00:13:16.025 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:16.025 lat (usec) : 250=23.53%, 500=74.90%, 750=1.45%, 1000=0.09% 00:13:16.025 lat (msec) : 2=0.03% 00:13:16.025 cpu : usr=1.80%, sys=4.90%, ctx=3250, majf=0, minf=11 00:13:16.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.025 issued rwts: total=1536,1635,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.025 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.025 job1: (groupid=0, jobs=1): err= 0: pid=69079: Sat Sep 28 08:50:53 2024 00:13:16.025 read: IOPS=1532, BW=6130KiB/s (6277kB/s)(6136KiB/1001msec) 00:13:16.025 slat (usec): min=8, max=125, avg=18.09, stdev= 9.30 00:13:16.025 clat (usec): min=213, max=2952, avg=349.27, stdev=115.57 00:13:16.025 lat (usec): min=247, max=2978, avg=367.36, stdev=117.95 00:13:16.025 clat percentiles (usec): 00:13:16.025 | 1.00th=[ 273], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:13:16.025 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 334], 00:13:16.025 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 396], 95.00th=[ 502], 00:13:16.025 | 99.00th=[ 676], 99.50th=[ 857], 99.90th=[ 2073], 99.95th=[ 2966], 00:13:16.025 | 99.99th=[ 2966] 00:13:16.025 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:16.025 slat (usec): min=13, max=123, avg=24.21, stdev= 6.77 00:13:16.025 clat (usec): min=136, max=1001, avg=256.30, stdev=31.17 00:13:16.025 lat (usec): min=180, max=1015, avg=280.51, stdev=31.26 00:13:16.025 clat percentiles (usec): 00:13:16.025 | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 235], 00:13:16.025 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:13:16.025 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:13:16.025 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 420], 99.95th=[ 1004], 00:13:16.025 | 99.99th=[ 1004] 00:13:16.025 bw ( KiB/s): min= 8192, max= 8192, per=32.33%, avg=8192.00, stdev= 0.00, samples=1 00:13:16.025 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:16.025 lat (usec) : 250=20.78%, 500=76.68%, 750=2.15%, 1000=0.20% 00:13:16.025 lat (msec) : 2=0.13%, 4=0.07% 00:13:16.025 cpu : usr=2.00%, sys=4.80%, ctx=3115, majf=0, minf=9 00:13:16.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.025 issued rwts: total=1534,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.025 
latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.025 job2: (groupid=0, jobs=1): err= 0: pid=69084: Sat Sep 28 08:50:53 2024 00:13:16.025 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:16.025 slat (nsec): min=9756, max=57136, avg=15995.64, stdev=5059.17 00:13:16.025 clat (usec): min=255, max=599, avg=319.47, stdev=42.22 00:13:16.025 lat (usec): min=271, max=611, avg=335.47, stdev=41.72 00:13:16.025 clat percentiles (usec): 00:13:16.025 | 1.00th=[ 265], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 289], 00:13:16.025 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 322], 00:13:16.025 | 70.00th=[ 334], 80.00th=[ 347], 90.00th=[ 367], 95.00th=[ 388], 00:13:16.025 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 562], 99.95th=[ 603], 00:13:16.025 | 99.99th=[ 603] 00:13:16.025 write: IOPS=1630, BW=6521KiB/s (6678kB/s)(6528KiB/1001msec); 0 zone resets 00:13:16.025 slat (usec): min=11, max=177, avg=25.77, stdev=16.37 00:13:16.025 clat (usec): min=156, max=1235, avg=267.63, stdev=73.44 00:13:16.025 lat (usec): min=193, max=1260, avg=293.40, stdev=78.96 00:13:16.025 clat percentiles (usec): 00:13:16.025 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 233], 00:13:16.025 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 260], 00:13:16.025 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 388], 00:13:16.025 | 99.00th=[ 578], 99.50th=[ 652], 99.90th=[ 1029], 99.95th=[ 1237], 00:13:16.025 | 99.99th=[ 1237] 00:13:16.025 bw ( KiB/s): min= 8192, max= 8192, per=32.33%, avg=8192.00, stdev= 0.00, samples=1 00:13:16.025 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:16.025 lat (usec) : 250=23.93%, 500=73.99%, 750=1.96%, 1000=0.06% 00:13:16.025 lat (msec) : 2=0.06% 00:13:16.025 cpu : usr=1.50%, sys=5.60%, ctx=3256, majf=0, minf=7 00:13:16.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.025 issued rwts: total=1536,1632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.025 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.025 job3: (groupid=0, jobs=1): err= 0: pid=69085: Sat Sep 28 08:50:53 2024 00:13:16.025 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:13:16.026 slat (usec): min=9, max=108, avg=16.91, stdev= 9.43 00:13:16.026 clat (usec): min=263, max=2940, avg=348.92, stdev=112.51 00:13:16.026 lat (usec): min=279, max=2967, avg=365.83, stdev=115.24 00:13:16.026 clat percentiles (usec): 00:13:16.026 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 306], 00:13:16.026 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338], 00:13:16.026 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 400], 95.00th=[ 494], 00:13:16.026 | 99.00th=[ 627], 99.50th=[ 693], 99.90th=[ 2057], 99.95th=[ 2933], 00:13:16.026 | 99.99th=[ 2933] 00:13:16.026 write: IOPS=1536, BW=6146KiB/s (6293kB/s)(6152KiB/1001msec); 0 zone resets 00:13:16.026 slat (usec): min=13, max=118, avg=22.44, stdev= 7.11 00:13:16.026 clat (usec): min=194, max=1102, avg=259.02, stdev=35.54 00:13:16.026 lat (usec): min=216, max=1128, avg=281.46, stdev=36.84 00:13:16.026 clat percentiles (usec): 00:13:16.026 | 1.00th=[ 206], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 239], 00:13:16.026 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 265], 00:13:16.026 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 302], 
00:13:16.026 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 668], 99.95th=[ 1106], 00:13:16.026 | 99.99th=[ 1106] 00:13:16.026 bw ( KiB/s): min= 8192, max= 8192, per=32.33%, avg=8192.00, stdev= 0.00, samples=1 00:13:16.026 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:13:16.026 lat (usec) : 250=20.10%, 500=77.33%, 750=2.34%, 1000=0.07% 00:13:16.026 lat (msec) : 2=0.10%, 4=0.07% 00:13:16.026 cpu : usr=1.70%, sys=4.60%, ctx=3121, majf=0, minf=9 00:13:16.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:16.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:16.026 issued rwts: total=1536,1538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:16.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:16.026 00:13:16.026 Run status group 0 (all jobs): 00:13:16.026 READ: bw=24.0MiB/s (25.1MB/s), 6130KiB/s-6138KiB/s (6277kB/s-6285kB/s), io=24.0MiB (25.2MB), run=1001-1001msec 00:13:16.026 WRITE: bw=24.7MiB/s (25.9MB/s), 6138KiB/s-6533KiB/s (6285kB/s-6690kB/s), io=24.8MiB (26.0MB), run=1001-1001msec 00:13:16.026 00:13:16.026 Disk stats (read/write): 00:13:16.026 nvme0n1: ios=1337/1536, merge=0/0, ticks=425/371, in_queue=796, util=88.18% 00:13:16.026 nvme0n2: ios=1266/1536, merge=0/0, ticks=426/383, in_queue=809, util=88.02% 00:13:16.026 nvme0n3: ios=1292/1536, merge=0/0, ticks=405/371, in_queue=776, util=89.28% 00:13:16.026 nvme0n4: ios=1224/1536, merge=0/0, ticks=392/380, in_queue=772, util=89.74% 00:13:16.026 08:50:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:16.026 [global] 00:13:16.026 thread=1 00:13:16.026 invalidate=1 00:13:16.026 rw=randwrite 00:13:16.026 time_based=1 00:13:16.026 runtime=1 00:13:16.026 ioengine=libaio 00:13:16.026 direct=1 00:13:16.026 bs=4096 00:13:16.026 iodepth=1 00:13:16.026 norandommap=0 00:13:16.026 numjobs=1 00:13:16.026 00:13:16.026 verify_dump=1 00:13:16.026 verify_backlog=512 00:13:16.026 verify_state_save=0 00:13:16.026 do_verify=1 00:13:16.026 verify=crc32c-intel 00:13:16.026 [job0] 00:13:16.026 filename=/dev/nvme0n1 00:13:16.026 [job1] 00:13:16.026 filename=/dev/nvme0n2 00:13:16.026 [job2] 00:13:16.026 filename=/dev/nvme0n3 00:13:16.026 [job3] 00:13:16.026 filename=/dev/nvme0n4 00:13:16.026 Could not set queue depth (nvme0n1) 00:13:16.026 Could not set queue depth (nvme0n2) 00:13:16.026 Could not set queue depth (nvme0n3) 00:13:16.026 Could not set queue depth (nvme0n4) 00:13:16.026 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:16.026 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:16.026 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:16.026 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:16.026 fio-3.35 00:13:16.026 Starting 4 threads 00:13:17.402 00:13:17.402 job0: (groupid=0, jobs=1): err= 0: pid=69145: Sat Sep 28 08:50:55 2024 00:13:17.402 read: IOPS=1233, BW=4935KiB/s (5054kB/s)(4940KiB/1001msec) 00:13:17.402 slat (usec): min=10, max=111, avg=22.52, stdev= 7.86 00:13:17.402 clat (usec): min=206, max=889, avg=383.59, stdev=93.00 00:13:17.402 lat (usec): min=225, max=957, avg=406.10, stdev=94.90 00:13:17.402 
clat percentiles (usec): 00:13:17.402 | 1.00th=[ 245], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 334], 00:13:17.402 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 359], 00:13:17.402 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 537], 95.00th=[ 611], 00:13:17.402 | 99.00th=[ 685], 99.50th=[ 725], 99.90th=[ 889], 99.95th=[ 889], 00:13:17.402 | 99.99th=[ 889] 00:13:17.402 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:17.402 slat (usec): min=16, max=107, avg=34.12, stdev= 8.68 00:13:17.402 clat (usec): min=146, max=579, avg=285.40, stdev=50.30 00:13:17.402 lat (usec): min=173, max=674, avg=319.51, stdev=54.30 00:13:17.402 clat percentiles (usec): 00:13:17.402 | 1.00th=[ 169], 5.00th=[ 245], 10.00th=[ 253], 20.00th=[ 262], 00:13:17.402 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:13:17.403 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 404], 00:13:17.403 | 99.00th=[ 494], 99.50th=[ 519], 99.90th=[ 578], 99.95th=[ 578], 00:13:17.403 | 99.99th=[ 578] 00:13:17.403 bw ( KiB/s): min= 7808, max= 7808, per=25.85%, avg=7808.00, stdev= 0.00, samples=1 00:13:17.403 iops : min= 1952, max= 1952, avg=1952.00, stdev= 0.00, samples=1 00:13:17.403 lat (usec) : 250=5.05%, 500=88.13%, 750=6.64%, 1000=0.18% 00:13:17.403 cpu : usr=1.60%, sys=6.50%, ctx=2800, majf=0, minf=9 00:13:17.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:17.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.403 issued rwts: total=1235,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:17.403 job1: (groupid=0, jobs=1): err= 0: pid=69146: Sat Sep 28 08:50:55 2024 00:13:17.403 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:13:17.403 slat (nsec): min=11917, max=93807, avg=15177.96, stdev=4659.04 00:13:17.403 clat (usec): min=179, max=766, avg=251.99, stdev=122.16 00:13:17.403 lat (usec): min=192, max=788, avg=267.17, stdev=124.27 00:13:17.403 clat percentiles (usec): 00:13:17.403 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 194], 00:13:17.403 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 212], 00:13:17.403 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 506], 95.00th=[ 594], 00:13:17.403 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 685], 99.95th=[ 701], 00:13:17.403 | 99.99th=[ 766] 00:13:17.403 write: IOPS=2270, BW=9083KiB/s (9301kB/s)(9092KiB/1001msec); 0 zone resets 00:13:17.403 slat (usec): min=7, max=171, avg=21.86, stdev=10.51 00:13:17.403 clat (usec): min=124, max=1980, avg=174.05, stdev=77.54 00:13:17.403 lat (usec): min=142, max=2003, avg=195.91, stdev=81.64 00:13:17.403 clat percentiles (usec): 00:13:17.403 | 1.00th=[ 128], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:13:17.403 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 161], 00:13:17.403 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 215], 95.00th=[ 314], 00:13:17.403 | 99.00th=[ 537], 99.50th=[ 578], 99.90th=[ 611], 99.95th=[ 627], 00:13:17.403 | 99.99th=[ 1975] 00:13:17.403 bw ( KiB/s): min=12184, max=12184, per=40.34%, avg=12184.00, stdev= 0.00, samples=1 00:13:17.403 iops : min= 3046, max= 3046, avg=3046.00, stdev= 0.00, samples=1 00:13:17.403 lat (usec) : 250=89.17%, 500=5.32%, 750=5.46%, 1000=0.02% 00:13:17.403 lat (msec) : 2=0.02% 00:13:17.403 cpu : usr=2.20%, sys=5.90%, ctx=4378, majf=0, minf=15 00:13:17.403 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:17.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.403 issued rwts: total=2048,2273,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:17.403 job2: (groupid=0, jobs=1): err= 0: pid=69147: Sat Sep 28 08:50:55 2024 00:13:17.403 read: IOPS=1209, BW=4839KiB/s (4955kB/s)(4844KiB/1001msec) 00:13:17.403 slat (nsec): min=16728, max=83877, avg=25756.18, stdev=9083.66 00:13:17.403 clat (usec): min=223, max=3332, avg=396.85, stdev=155.22 00:13:17.403 lat (usec): min=244, max=3353, avg=422.61, stdev=160.65 00:13:17.403 clat percentiles (usec): 00:13:17.403 | 1.00th=[ 302], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 334], 00:13:17.403 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 359], 00:13:17.403 | 70.00th=[ 367], 80.00th=[ 383], 90.00th=[ 529], 95.00th=[ 685], 00:13:17.403 | 99.00th=[ 979], 99.50th=[ 1004], 99.90th=[ 1205], 99.95th=[ 3326], 00:13:17.403 | 99.99th=[ 3326] 00:13:17.403 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:13:17.403 slat (nsec): min=23640, max=81204, avg=36467.30, stdev=6935.65 00:13:17.403 clat (usec): min=151, max=517, avg=275.41, stdev=46.81 00:13:17.403 lat (usec): min=190, max=598, avg=311.88, stdev=47.21 00:13:17.403 clat percentiles (usec): 00:13:17.403 | 1.00th=[ 174], 5.00th=[ 206], 10.00th=[ 233], 20.00th=[ 249], 00:13:17.403 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:13:17.403 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 322], 95.00th=[ 375], 00:13:17.403 | 99.00th=[ 457], 99.50th=[ 478], 99.90th=[ 498], 99.95th=[ 519], 00:13:17.403 | 99.99th=[ 519] 00:13:17.403 bw ( KiB/s): min= 8016, max= 8016, per=26.54%, avg=8016.00, stdev= 0.00, samples=1 00:13:17.403 iops : min= 2004, max= 2004, avg=2004.00, stdev= 0.00, samples=1 00:13:17.403 lat (usec) : 250=11.83%, 500=82.27%, 750=4.15%, 1000=1.46% 00:13:17.403 lat (msec) : 2=0.25%, 4=0.04% 00:13:17.403 cpu : usr=2.50%, sys=6.60%, ctx=2748, majf=0, minf=15 00:13:17.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:17.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.403 issued rwts: total=1211,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:17.403 job3: (groupid=0, jobs=1): err= 0: pid=69148: Sat Sep 28 08:50:55 2024 00:13:17.403 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:13:17.403 slat (nsec): min=11759, max=68209, avg=15947.20, stdev=5183.70 00:13:17.403 clat (usec): min=184, max=7366, avg=253.22, stdev=311.45 00:13:17.403 lat (usec): min=199, max=7388, avg=269.17, stdev=312.11 00:13:17.403 clat percentiles (usec): 00:13:17.403 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 204], 00:13:17.403 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:13:17.403 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 498], 00:13:17.403 | 99.00th=[ 594], 99.50th=[ 627], 99.90th=[ 7111], 99.95th=[ 7177], 00:13:17.403 | 99.99th=[ 7373] 00:13:17.403 write: IOPS=2210, BW=8843KiB/s (9055kB/s)(8852KiB/1001msec); 0 zone resets 00:13:17.403 slat (usec): min=11, max=202, avg=23.74, stdev=12.91 00:13:17.403 clat (usec): min=130, max=709, avg=175.42, 
stdev=61.96 00:13:17.403 lat (usec): min=151, max=767, avg=199.15, stdev=68.76 00:13:17.403 clat percentiles (usec): 00:13:17.403 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 147], 00:13:17.403 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 161], 60.00th=[ 165], 00:13:17.403 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 198], 95.00th=[ 302], 00:13:17.403 | 99.00th=[ 502], 99.50th=[ 529], 99.90th=[ 660], 99.95th=[ 685], 00:13:17.403 | 99.99th=[ 709] 00:13:17.403 bw ( KiB/s): min=11048, max=11048, per=36.58%, avg=11048.00, stdev= 0.00, samples=1 00:13:17.403 iops : min= 2762, max= 2762, avg=2762.00, stdev= 0.00, samples=1 00:13:17.403 lat (usec) : 250=91.03%, 500=6.08%, 750=2.75% 00:13:17.403 lat (msec) : 4=0.05%, 10=0.09% 00:13:17.403 cpu : usr=1.20%, sys=7.30%, ctx=4317, majf=0, minf=10 00:13:17.403 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:17.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.403 issued rwts: total=2048,2213,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.403 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:17.403 00:13:17.403 Run status group 0 (all jobs): 00:13:17.403 READ: bw=25.5MiB/s (26.8MB/s), 4839KiB/s-8184KiB/s (4955kB/s-8380kB/s), io=25.6MiB (26.8MB), run=1001-1001msec 00:13:17.403 WRITE: bw=29.5MiB/s (30.9MB/s), 6138KiB/s-9083KiB/s (6285kB/s-9301kB/s), io=29.5MiB (31.0MB), run=1001-1001msec 00:13:17.403 00:13:17.403 Disk stats (read/write): 00:13:17.403 nvme0n1: ios=1074/1513, merge=0/0, ticks=407/454, in_queue=861, util=89.08% 00:13:17.403 nvme0n2: ios=2005/2048, merge=0/0, ticks=492/332, in_queue=824, util=88.78% 00:13:17.403 nvme0n3: ios=1051/1517, merge=0/0, ticks=423/446, in_queue=869, util=90.00% 00:13:17.403 nvme0n4: ios=1928/2048, merge=0/0, ticks=441/355, in_queue=796, util=88.59% 00:13:17.403 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:17.403 [global] 00:13:17.403 thread=1 00:13:17.403 invalidate=1 00:13:17.403 rw=write 00:13:17.403 time_based=1 00:13:17.403 runtime=1 00:13:17.403 ioengine=libaio 00:13:17.403 direct=1 00:13:17.403 bs=4096 00:13:17.403 iodepth=128 00:13:17.403 norandommap=0 00:13:17.403 numjobs=1 00:13:17.403 00:13:17.403 verify_dump=1 00:13:17.403 verify_backlog=512 00:13:17.403 verify_state_save=0 00:13:17.403 do_verify=1 00:13:17.403 verify=crc32c-intel 00:13:17.403 [job0] 00:13:17.403 filename=/dev/nvme0n1 00:13:17.403 [job1] 00:13:17.403 filename=/dev/nvme0n2 00:13:17.403 [job2] 00:13:17.403 filename=/dev/nvme0n3 00:13:17.403 [job3] 00:13:17.403 filename=/dev/nvme0n4 00:13:17.403 Could not set queue depth (nvme0n1) 00:13:17.403 Could not set queue depth (nvme0n2) 00:13:17.403 Could not set queue depth (nvme0n3) 00:13:17.403 Could not set queue depth (nvme0n4) 00:13:17.403 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:17.403 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:17.403 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:17.403 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:17.403 fio-3.35 00:13:17.403 Starting 4 threads 00:13:18.781 00:13:18.782 job0: (groupid=0, jobs=1): 
err= 0: pid=69203: Sat Sep 28 08:50:56 2024 00:13:18.782 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:13:18.782 slat (usec): min=7, max=10936, avg=206.98, stdev=1128.69 00:13:18.782 clat (usec): min=12801, max=45415, avg=26468.32, stdev=8044.49 00:13:18.782 lat (usec): min=15528, max=45433, avg=26675.30, stdev=8032.16 00:13:18.782 clat percentiles (usec): 00:13:18.782 | 1.00th=[15533], 5.00th=[16909], 10.00th=[17171], 20.00th=[17957], 00:13:18.782 | 30.00th=[19792], 40.00th=[23987], 50.00th=[27395], 60.00th=[28443], 00:13:18.782 | 70.00th=[29230], 80.00th=[31589], 90.00th=[39060], 95.00th=[43254], 00:13:18.782 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:13:18.782 | 99.99th=[45351] 00:13:18.782 write: IOPS=2967, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1003msec); 0 zone resets 00:13:18.782 slat (usec): min=13, max=9993, avg=149.48, stdev=737.39 00:13:18.782 clat (usec): min=2592, max=38850, avg=19678.87, stdev=5667.32 00:13:18.782 lat (usec): min=2611, max=38894, avg=19828.35, stdev=5646.85 00:13:18.782 clat percentiles (usec): 00:13:18.782 | 1.00th=[ 3261], 5.00th=[13698], 10.00th=[14091], 20.00th=[14484], 00:13:18.782 | 30.00th=[16057], 40.00th=[18482], 50.00th=[19006], 60.00th=[19792], 00:13:18.782 | 70.00th=[22152], 80.00th=[25560], 90.00th=[27395], 95.00th=[28181], 00:13:18.782 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:13:18.782 | 99.99th=[39060] 00:13:18.782 bw ( KiB/s): min=10504, max=12312, per=25.32%, avg=11408.00, stdev=1278.45, samples=2 00:13:18.782 iops : min= 2626, max= 3078, avg=2852.00, stdev=319.61, samples=2 00:13:18.782 lat (msec) : 4=0.58%, 10=0.81%, 20=45.34%, 50=53.27% 00:13:18.782 cpu : usr=3.49%, sys=8.18%, ctx=173, majf=0, minf=11 00:13:18.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:18.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:18.782 issued rwts: total=2560,2976,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:18.782 job1: (groupid=0, jobs=1): err= 0: pid=69204: Sat Sep 28 08:50:56 2024 00:13:18.782 read: IOPS=2154, BW=8619KiB/s (8826kB/s)(8636KiB/1002msec) 00:13:18.782 slat (usec): min=5, max=6386, avg=197.05, stdev=846.47 00:13:18.782 clat (usec): min=1368, max=43184, avg=24488.54, stdev=5590.04 00:13:18.782 lat (usec): min=6126, max=46763, avg=24685.58, stdev=5641.04 00:13:18.782 clat percentiles (usec): 00:13:18.782 | 1.00th=[ 6521], 5.00th=[17171], 10.00th=[19530], 20.00th=[20317], 00:13:18.782 | 30.00th=[21627], 40.00th=[21890], 50.00th=[23462], 60.00th=[25297], 00:13:18.782 | 70.00th=[27919], 80.00th=[30278], 90.00th=[31065], 95.00th=[32113], 00:13:18.782 | 99.00th=[39584], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:13:18.782 | 99.99th=[43254] 00:13:18.782 write: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:13:18.782 slat (usec): min=16, max=8610, avg=215.39, stdev=836.76 00:13:18.782 clat (usec): min=12384, max=62915, avg=28554.55, stdev=13877.40 00:13:18.782 lat (usec): min=12415, max=62940, avg=28769.94, stdev=13976.68 00:13:18.782 clat percentiles (usec): 00:13:18.782 | 1.00th=[12911], 5.00th=[13960], 10.00th=[14615], 20.00th=[16581], 00:13:18.782 | 30.00th=[17695], 40.00th=[20317], 50.00th=[21103], 60.00th=[28443], 00:13:18.782 | 70.00th=[36963], 80.00th=[41681], 90.00th=[50070], 95.00th=[56361], 00:13:18.782 | 
99.00th=[62129], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:13:18.782 | 99.99th=[63177] 00:13:18.782 bw ( KiB/s): min= 9448, max=10917, per=22.60%, avg=10182.50, stdev=1038.74, samples=2 00:13:18.782 iops : min= 2362, max= 2729, avg=2545.50, stdev=259.51, samples=2 00:13:18.782 lat (msec) : 2=0.02%, 10=0.78%, 20=27.36%, 50=66.41%, 100=5.42% 00:13:18.782 cpu : usr=2.70%, sys=8.39%, ctx=268, majf=0, minf=10 00:13:18.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:18.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:18.782 issued rwts: total=2159,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:18.782 job2: (groupid=0, jobs=1): err= 0: pid=69205: Sat Sep 28 08:50:56 2024 00:13:18.782 read: IOPS=2744, BW=10.7MiB/s (11.2MB/s)(10.8MiB/1003msec) 00:13:18.782 slat (usec): min=6, max=10542, avg=189.00, stdev=1011.05 00:13:18.782 clat (usec): min=712, max=43948, avg=24226.55, stdev=7468.68 00:13:18.782 lat (usec): min=5961, max=43965, avg=24415.55, stdev=7452.53 00:13:18.782 clat percentiles (usec): 00:13:18.782 | 1.00th=[ 6587], 5.00th=[16712], 10.00th=[17957], 20.00th=[18482], 00:13:18.782 | 30.00th=[19006], 40.00th=[19530], 50.00th=[21627], 60.00th=[25297], 00:13:18.782 | 70.00th=[27657], 80.00th=[28967], 90.00th=[36963], 95.00th=[40633], 00:13:18.782 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:13:18.782 | 99.99th=[43779] 00:13:18.782 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:13:18.782 slat (usec): min=13, max=9087, avg=148.07, stdev=717.02 00:13:18.782 clat (usec): min=11858, max=32774, avg=19386.37, stdev=4199.03 00:13:18.782 lat (usec): min=14461, max=32796, avg=19534.44, stdev=4163.83 00:13:18.782 clat percentiles (usec): 00:13:18.782 | 1.00th=[12911], 5.00th=[14746], 10.00th=[15139], 20.00th=[15533], 00:13:18.782 | 30.00th=[15926], 40.00th=[17171], 50.00th=[19006], 60.00th=[20055], 00:13:18.782 | 70.00th=[20579], 80.00th=[22414], 90.00th=[26870], 95.00th=[27132], 00:13:18.782 | 99.00th=[32637], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:13:18.782 | 99.99th=[32900] 00:13:18.782 bw ( KiB/s): min=12288, max=12288, per=27.28%, avg=12288.00, stdev= 0.00, samples=2 00:13:18.782 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:13:18.782 lat (usec) : 750=0.02% 00:13:18.782 lat (msec) : 10=0.55%, 20=51.76%, 50=47.67% 00:13:18.782 cpu : usr=3.09%, sys=9.28%, ctx=185, majf=0, minf=9 00:13:18.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:13:18.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:18.782 issued rwts: total=2753,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:18.782 job3: (groupid=0, jobs=1): err= 0: pid=69206: Sat Sep 28 08:50:56 2024 00:13:18.782 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:13:18.782 slat (usec): min=5, max=7312, avg=186.30, stdev=806.07 00:13:18.782 clat (usec): min=15652, max=40142, avg=23982.10, stdev=3997.86 00:13:18.782 lat (usec): min=15675, max=40471, avg=24168.41, stdev=4066.71 00:13:18.782 clat percentiles (usec): 00:13:18.782 | 1.00th=[16319], 5.00th=[19006], 10.00th=[20055], 20.00th=[21103], 00:13:18.782 
| 30.00th=[21365], 40.00th=[21890], 50.00th=[22152], 60.00th=[23987], 00:13:18.782 | 70.00th=[26084], 80.00th=[28181], 90.00th=[29754], 95.00th=[30278], 00:13:18.782 | 99.00th=[33817], 99.50th=[36439], 99.90th=[40109], 99.95th=[40109], 00:13:18.782 | 99.99th=[40109] 00:13:18.782 write: IOPS=2705, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1006msec); 0 zone resets 00:13:18.782 slat (usec): min=13, max=8765, avg=184.37, stdev=792.99 00:13:18.782 clat (usec): min=3518, max=57538, avg=23925.48, stdev=10358.98 00:13:18.782 lat (usec): min=6770, max=57563, avg=24109.85, stdev=10429.73 00:13:18.782 clat percentiles (usec): 00:13:18.782 | 1.00th=[12911], 5.00th=[15270], 10.00th=[15664], 20.00th=[16909], 00:13:18.782 | 30.00th=[17695], 40.00th=[18482], 50.00th=[19792], 60.00th=[20579], 00:13:18.782 | 70.00th=[23200], 80.00th=[32113], 90.00th=[41157], 95.00th=[49021], 00:13:18.782 | 99.00th=[54789], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:13:18.782 | 99.99th=[57410] 00:13:18.782 bw ( KiB/s): min= 9664, max=11110, per=23.06%, avg=10387.00, stdev=1022.48, samples=2 00:13:18.782 iops : min= 2416, max= 2777, avg=2596.50, stdev=255.27, samples=2 00:13:18.782 lat (msec) : 4=0.02%, 10=0.15%, 20=30.58%, 50=67.00%, 100=2.25% 00:13:18.782 cpu : usr=2.19%, sys=8.96%, ctx=252, majf=0, minf=3 00:13:18.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:18.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:18.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:18.782 issued rwts: total=2560,2722,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:18.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:18.782 00:13:18.782 Run status group 0 (all jobs): 00:13:18.782 READ: bw=39.0MiB/s (40.8MB/s), 8619KiB/s-10.7MiB/s (8826kB/s-11.2MB/s), io=39.2MiB (41.1MB), run=1002-1006msec 00:13:18.782 WRITE: bw=44.0MiB/s (46.1MB/s), 9.98MiB/s-12.0MiB/s (10.5MB/s-12.5MB/s), io=44.3MiB (46.4MB), run=1002-1006msec 00:13:18.782 00:13:18.782 Disk stats (read/write): 00:13:18.782 nvme0n1: ios=2162/2560, merge=0/0, ticks=13673/11502, in_queue=25175, util=88.38% 00:13:18.782 nvme0n2: ios=2097/2239, merge=0/0, ticks=16609/17265, in_queue=33874, util=89.07% 00:13:18.782 nvme0n3: ios=2272/2560, merge=0/0, ticks=14004/11251, in_queue=25255, util=89.26% 00:13:18.782 nvme0n4: ios=2048/2550, merge=0/0, ticks=16488/17563, in_queue=34051, util=89.72% 00:13:18.782 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:18.782 [global] 00:13:18.782 thread=1 00:13:18.782 invalidate=1 00:13:18.782 rw=randwrite 00:13:18.782 time_based=1 00:13:18.782 runtime=1 00:13:18.782 ioengine=libaio 00:13:18.782 direct=1 00:13:18.782 bs=4096 00:13:18.782 iodepth=128 00:13:18.782 norandommap=0 00:13:18.782 numjobs=1 00:13:18.782 00:13:18.782 verify_dump=1 00:13:18.782 verify_backlog=512 00:13:18.782 verify_state_save=0 00:13:18.782 do_verify=1 00:13:18.782 verify=crc32c-intel 00:13:18.782 [job0] 00:13:18.782 filename=/dev/nvme0n1 00:13:18.782 [job1] 00:13:18.782 filename=/dev/nvme0n2 00:13:18.782 [job2] 00:13:18.782 filename=/dev/nvme0n3 00:13:18.782 [job3] 00:13:18.782 filename=/dev/nvme0n4 00:13:18.782 Could not set queue depth (nvme0n1) 00:13:18.782 Could not set queue depth (nvme0n2) 00:13:18.782 Could not set queue depth (nvme0n3) 00:13:18.782 Could not set queue depth (nvme0n4) 00:13:18.783 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:18.783 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:18.783 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:18.783 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:18.783 fio-3.35 00:13:18.783 Starting 4 threads 00:13:20.161 00:13:20.161 job0: (groupid=0, jobs=1): err= 0: pid=69259: Sat Sep 28 08:50:57 2024 00:13:20.161 read: IOPS=2376, BW=9507KiB/s (9736kB/s)(9536KiB/1003msec) 00:13:20.161 slat (usec): min=7, max=9669, avg=208.97, stdev=765.95 00:13:20.161 clat (usec): min=1791, max=35033, avg=25922.94, stdev=4217.79 00:13:20.161 lat (usec): min=4439, max=35055, avg=26131.90, stdev=4210.36 00:13:20.161 clat percentiles (usec): 00:13:20.162 | 1.00th=[ 8586], 5.00th=[19268], 10.00th=[22152], 20.00th=[24249], 00:13:20.162 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26084], 60.00th=[26608], 00:13:20.162 | 70.00th=[27395], 80.00th=[28181], 90.00th=[30540], 95.00th=[32375], 00:13:20.162 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:13:20.162 | 99.99th=[34866] 00:13:20.162 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:13:20.162 slat (usec): min=5, max=9144, avg=187.59, stdev=774.84 00:13:20.162 clat (usec): min=14359, max=34047, avg=25479.22, stdev=4021.65 00:13:20.162 lat (usec): min=15165, max=34190, avg=25666.82, stdev=4026.79 00:13:20.162 clat percentiles (usec): 00:13:20.162 | 1.00th=[15401], 5.00th=[17957], 10.00th=[19530], 20.00th=[22152], 00:13:20.162 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25560], 60.00th=[27132], 00:13:20.162 | 70.00th=[28181], 80.00th=[28705], 90.00th=[30016], 95.00th=[31589], 00:13:20.162 | 99.00th=[33162], 99.50th=[33424], 99.90th=[33817], 99.95th=[33817], 00:13:20.162 | 99.99th=[33817] 00:13:20.162 bw ( KiB/s): min= 8944, max=11559, per=18.04%, avg=10251.50, stdev=1849.08, samples=2 00:13:20.162 iops : min= 2236, max= 2889, avg=2562.50, stdev=461.74, samples=2 00:13:20.162 lat (msec) : 2=0.02%, 10=0.83%, 20=8.90%, 50=90.25% 00:13:20.162 cpu : usr=2.99%, sys=7.09%, ctx=669, majf=0, minf=13 00:13:20.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:13:20.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:20.162 issued rwts: total=2384,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:20.162 job1: (groupid=0, jobs=1): err= 0: pid=69260: Sat Sep 28 08:50:57 2024 00:13:20.162 read: IOPS=2528, BW=9.88MiB/s (10.4MB/s)(9.91MiB/1003msec) 00:13:20.162 slat (usec): min=6, max=13944, avg=212.24, stdev=832.11 00:13:20.162 clat (usec): min=1680, max=40177, avg=26355.66, stdev=4429.10 00:13:20.162 lat (usec): min=5106, max=40317, avg=26567.89, stdev=4442.62 00:13:20.162 clat percentiles (usec): 00:13:20.162 | 1.00th=[ 8160], 5.00th=[19792], 10.00th=[22414], 20.00th=[24249], 00:13:20.162 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26084], 60.00th=[26870], 00:13:20.162 | 70.00th=[27919], 80.00th=[28967], 90.00th=[31851], 95.00th=[32637], 00:13:20.162 | 99.00th=[35914], 99.50th=[36963], 99.90th=[38011], 99.95th=[39060], 00:13:20.162 | 99.99th=[40109] 00:13:20.162 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 
00:13:20.162 slat (usec): min=5, max=8994, avg=172.21, stdev=718.16 00:13:20.162 clat (usec): min=7390, max=34623, avg=23529.02, stdev=5256.85 00:13:20.162 lat (usec): min=10896, max=34641, avg=23701.22, stdev=5273.92 00:13:20.162 clat percentiles (usec): 00:13:20.162 | 1.00th=[11994], 5.00th=[12256], 10.00th=[16581], 20.00th=[18744], 00:13:20.162 | 30.00th=[21103], 40.00th=[23200], 50.00th=[24773], 60.00th=[25297], 00:13:20.162 | 70.00th=[26870], 80.00th=[28443], 90.00th=[30016], 95.00th=[30802], 00:13:20.162 | 99.00th=[32113], 99.50th=[32900], 99.90th=[33424], 99.95th=[33817], 00:13:20.162 | 99.99th=[34866] 00:13:20.162 bw ( KiB/s): min= 8200, max=12304, per=18.04%, avg=10252.00, stdev=2901.97, samples=2 00:13:20.162 iops : min= 2050, max= 3076, avg=2563.00, stdev=725.49, samples=2 00:13:20.162 lat (msec) : 2=0.02%, 10=0.88%, 20=13.19%, 50=85.91% 00:13:20.162 cpu : usr=3.19%, sys=7.09%, ctx=686, majf=0, minf=8 00:13:20.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:20.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:20.162 issued rwts: total=2536,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:20.162 job2: (groupid=0, jobs=1): err= 0: pid=69261: Sat Sep 28 08:50:57 2024 00:13:20.162 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:13:20.162 slat (usec): min=8, max=6135, avg=117.23, stdev=577.63 00:13:20.162 clat (usec): min=8817, max=21333, avg=14947.42, stdev=1373.21 00:13:20.162 lat (usec): min=8858, max=21516, avg=15064.65, stdev=1434.94 00:13:20.162 clat percentiles (usec): 00:13:20.162 | 1.00th=[11469], 5.00th=[13173], 10.00th=[13566], 20.00th=[14091], 00:13:20.162 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14877], 60.00th=[15008], 00:13:20.162 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16319], 95.00th=[17171], 00:13:20.162 | 99.00th=[19530], 99.50th=[20579], 99.90th=[21103], 99.95th=[21365], 00:13:20.162 | 99.99th=[21365] 00:13:20.162 write: IOPS=4529, BW=17.7MiB/s (18.6MB/s)(17.7MiB/1001msec); 0 zone resets 00:13:20.162 slat (usec): min=10, max=7845, avg=106.96, stdev=617.58 00:13:20.162 clat (usec): min=508, max=27769, avg=14119.29, stdev=2010.40 00:13:20.162 lat (usec): min=4855, max=27797, avg=14226.25, stdev=2086.82 00:13:20.162 clat percentiles (usec): 00:13:20.162 | 1.00th=[ 6063], 5.00th=[11731], 10.00th=[12780], 20.00th=[13173], 00:13:20.162 | 30.00th=[13566], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:13:20.162 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15926], 95.00th=[17433], 00:13:20.162 | 99.00th=[21103], 99.50th=[21365], 99.90th=[23200], 99.95th=[27657], 00:13:20.162 | 99.99th=[27657] 00:13:20.162 bw ( KiB/s): min=17048, max=17048, per=30.00%, avg=17048.00, stdev= 0.00, samples=1 00:13:20.162 iops : min= 4262, max= 4262, avg=4262.00, stdev= 0.00, samples=1 00:13:20.162 lat (usec) : 750=0.01% 00:13:20.162 lat (msec) : 10=1.91%, 20=96.67%, 50=1.40% 00:13:20.162 cpu : usr=3.80%, sys=13.00%, ctx=304, majf=0, minf=14 00:13:20.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:20.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:20.162 issued rwts: total=4096,4534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.162 latency : target=0, window=0, percentile=100.00%, depth=128 
00:13:20.162 job3: (groupid=0, jobs=1): err= 0: pid=69262: Sat Sep 28 08:50:57 2024 00:13:20.162 read: IOPS=4206, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1004msec) 00:13:20.162 slat (usec): min=7, max=8133, avg=108.73, stdev=696.34 00:13:20.162 clat (usec): min=1887, max=24182, avg=15042.06, stdev=1788.79 00:13:20.162 lat (usec): min=8912, max=29211, avg=15150.79, stdev=1813.68 00:13:20.162 clat percentiles (usec): 00:13:20.162 | 1.00th=[ 9503], 5.00th=[10945], 10.00th=[13960], 20.00th=[14353], 00:13:20.162 | 30.00th=[14615], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:13:20.162 | 70.00th=[15664], 80.00th=[15795], 90.00th=[16319], 95.00th=[16909], 00:13:20.162 | 99.00th=[23200], 99.50th=[23200], 99.90th=[24249], 99.95th=[24249], 00:13:20.162 | 99.99th=[24249] 00:13:20.162 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:13:20.162 slat (usec): min=4, max=11764, avg=109.85, stdev=667.09 00:13:20.162 clat (usec): min=7179, max=20701, avg=13827.44, stdev=1445.28 00:13:20.162 lat (usec): min=9443, max=20726, avg=13937.29, stdev=1320.82 00:13:20.162 clat percentiles (usec): 00:13:20.162 | 1.00th=[ 8979], 5.00th=[11994], 10.00th=[12518], 20.00th=[13042], 00:13:20.162 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13829], 60.00th=[13960], 00:13:20.162 | 70.00th=[14222], 80.00th=[14353], 90.00th=[15139], 95.00th=[15926], 00:13:20.162 | 99.00th=[20055], 99.50th=[20317], 99.90th=[20579], 99.95th=[20579], 00:13:20.162 | 99.99th=[20579] 00:13:20.162 bw ( KiB/s): min=17912, max=18952, per=32.44%, avg=18432.00, stdev=735.39, samples=2 00:13:20.162 iops : min= 4478, max= 4738, avg=4608.00, stdev=183.85, samples=2 00:13:20.162 lat (msec) : 2=0.01%, 10=1.97%, 20=96.66%, 50=1.36% 00:13:20.162 cpu : usr=3.69%, sys=13.16%, ctx=179, majf=0, minf=7 00:13:20.162 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:20.162 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.162 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:20.162 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.162 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:20.162 00:13:20.162 Run status group 0 (all jobs): 00:13:20.162 READ: bw=51.5MiB/s (54.0MB/s), 9507KiB/s-16.4MiB/s (9736kB/s-17.2MB/s), io=51.7MiB (54.2MB), run=1001-1004msec 00:13:20.162 WRITE: bw=55.5MiB/s (58.2MB/s), 9.97MiB/s-17.9MiB/s (10.5MB/s-18.8MB/s), io=55.7MiB (58.4MB), run=1001-1004msec 00:13:20.162 00:13:20.162 Disk stats (read/write): 00:13:20.162 nvme0n1: ios=2098/2148, merge=0/0, ticks=19537/20635, in_queue=40172, util=87.56% 00:13:20.162 nvme0n2: ios=2097/2357, merge=0/0, ticks=20014/20973, in_queue=40987, util=89.07% 00:13:20.162 nvme0n3: ios=3584/3714, merge=0/0, ticks=25503/22933, in_queue=48436, util=88.10% 00:13:20.162 nvme0n4: ios=3584/3904, merge=0/0, ticks=51684/49786, in_queue=101470, util=89.79% 00:13:20.162 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:20.162 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=69282 00:13:20.162 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:20.162 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:20.162 [global] 00:13:20.162 thread=1 00:13:20.162 invalidate=1 00:13:20.162 rw=read 00:13:20.162 time_based=1 00:13:20.162 runtime=10 00:13:20.162 
ioengine=libaio 00:13:20.162 direct=1 00:13:20.162 bs=4096 00:13:20.162 iodepth=1 00:13:20.162 norandommap=1 00:13:20.162 numjobs=1 00:13:20.162 00:13:20.162 [job0] 00:13:20.162 filename=/dev/nvme0n1 00:13:20.162 [job1] 00:13:20.162 filename=/dev/nvme0n2 00:13:20.162 [job2] 00:13:20.162 filename=/dev/nvme0n3 00:13:20.162 [job3] 00:13:20.162 filename=/dev/nvme0n4 00:13:20.162 Could not set queue depth (nvme0n1) 00:13:20.162 Could not set queue depth (nvme0n2) 00:13:20.162 Could not set queue depth (nvme0n3) 00:13:20.162 Could not set queue depth (nvme0n4) 00:13:20.162 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.162 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.162 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.162 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:20.162 fio-3.35 00:13:20.162 Starting 4 threads 00:13:23.450 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:23.451 fio: pid=69325, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:23.451 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=44797952, buflen=4096 00:13:23.451 08:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:23.709 fio: pid=69324, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:23.709 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44015616, buflen=4096 00:13:23.709 08:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:23.709 08:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:23.968 fio: pid=69322, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:23.968 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=43237376, buflen=4096 00:13:23.968 08:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:23.968 08:51:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:24.228 fio: pid=69323, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:13:24.228 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3166208, buflen=4096 00:13:24.228 00:13:24.228 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69322: Sat Sep 28 08:51:02 2024 00:13:24.228 read: IOPS=3021, BW=11.8MiB/s (12.4MB/s)(41.2MiB/3494msec) 00:13:24.228 slat (usec): min=8, max=17331, avg=21.11, stdev=233.26 00:13:24.228 clat (usec): min=170, max=3542, avg=308.17, stdev=78.38 00:13:24.228 lat (usec): min=181, max=17589, avg=329.28, stdev=245.38 00:13:24.228 clat percentiles (usec): 00:13:24.228 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 229], 00:13:24.228 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 330], 00:13:24.228 | 70.00th=[ 338], 
80.00th=[ 351], 90.00th=[ 363], 95.00th=[ 375], 00:13:24.228 | 99.00th=[ 412], 99.50th=[ 453], 99.90th=[ 619], 99.95th=[ 766], 00:13:24.228 | 99.99th=[ 3195] 00:13:24.228 bw ( KiB/s): min=10618, max=12609, per=22.53%, avg=11396.50, stdev=780.23, samples=6 00:13:24.228 iops : min= 2654, max= 3152, avg=2849.00, stdev=195.08, samples=6 00:13:24.228 lat (usec) : 250=20.96%, 500=78.75%, 750=0.22%, 1000=0.02% 00:13:24.228 lat (msec) : 2=0.01%, 4=0.03% 00:13:24.228 cpu : usr=1.03%, sys=4.70%, ctx=10561, majf=0, minf=1 00:13:24.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.228 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.228 issued rwts: total=10557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.228 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69323: Sat Sep 28 08:51:02 2024 00:13:24.228 read: IOPS=4391, BW=17.2MiB/s (18.0MB/s)(67.0MiB/3907msec) 00:13:24.228 slat (usec): min=8, max=12258, avg=17.07, stdev=175.64 00:13:24.228 clat (usec): min=170, max=3032, avg=209.30, stdev=62.34 00:13:24.228 lat (usec): min=182, max=12501, avg=226.37, stdev=188.63 00:13:24.228 clat percentiles (usec): 00:13:24.228 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 190], 00:13:24.228 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 206], 00:13:24.228 | 70.00th=[ 210], 80.00th=[ 219], 90.00th=[ 231], 95.00th=[ 249], 00:13:24.228 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 1045], 99.95th=[ 1745], 00:13:24.228 | 99.99th=[ 2671] 00:13:24.228 bw ( KiB/s): min=14019, max=18568, per=34.88%, avg=17639.29, stdev=1643.74, samples=7 00:13:24.228 iops : min= 3504, max= 4642, avg=4409.71, stdev=411.21, samples=7 00:13:24.228 lat (usec) : 250=95.23%, 500=4.55%, 750=0.09%, 1000=0.02% 00:13:24.228 lat (msec) : 2=0.09%, 4=0.02% 00:13:24.228 cpu : usr=1.41%, sys=5.33%, ctx=17170, majf=0, minf=1 00:13:24.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.228 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.228 issued rwts: total=17158,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.228 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69324: Sat Sep 28 08:51:02 2024 00:13:24.228 read: IOPS=3317, BW=13.0MiB/s (13.6MB/s)(42.0MiB/3239msec) 00:13:24.228 slat (usec): min=10, max=7611, avg=20.51, stdev=100.92 00:13:24.228 clat (usec): min=188, max=3174, avg=279.13, stdev=84.16 00:13:24.228 lat (usec): min=204, max=7975, avg=299.64, stdev=134.28 00:13:24.228 clat percentiles (usec): 00:13:24.228 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:13:24.228 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 247], 60.00th=[ 314], 00:13:24.228 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 375], 00:13:24.228 | 99.00th=[ 441], 99.50th=[ 515], 99.90th=[ 955], 99.95th=[ 1631], 00:13:24.228 | 99.99th=[ 2606] 00:13:24.228 bw ( KiB/s): min=10480, max=16904, per=26.65%, avg=13476.00, stdev=3060.84, samples=6 00:13:24.228 iops : min= 2620, max= 4226, avg=3369.00, stdev=765.21, samples=6 00:13:24.228 lat (usec) : 250=51.04%, 500=48.33%, 750=0.51%, 
1000=0.03% 00:13:24.228 lat (msec) : 2=0.06%, 4=0.03% 00:13:24.228 cpu : usr=1.27%, sys=5.62%, ctx=10752, majf=0, minf=2 00:13:24.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.228 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.228 issued rwts: total=10747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.228 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69325: Sat Sep 28 08:51:02 2024 00:13:24.228 read: IOPS=3676, BW=14.4MiB/s (15.1MB/s)(42.7MiB/2975msec) 00:13:24.228 slat (nsec): min=8396, max=93013, avg=13960.51, stdev=4745.26 00:13:24.228 clat (usec): min=184, max=1022, avg=256.51, stdev=54.81 00:13:24.228 lat (usec): min=195, max=1036, avg=270.47, stdev=54.22 00:13:24.228 clat percentiles (usec): 00:13:24.228 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 208], 00:13:24.228 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 247], 00:13:24.228 | 70.00th=[ 306], 80.00th=[ 322], 90.00th=[ 334], 95.00th=[ 347], 00:13:24.228 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 392], 99.95th=[ 408], 00:13:24.228 | 99.99th=[ 644] 00:13:24.228 bw ( KiB/s): min=11856, max=17280, per=30.22%, avg=15284.80, stdev=2532.10, samples=5 00:13:24.228 iops : min= 2964, max= 4320, avg=3821.20, stdev=633.02, samples=5 00:13:24.228 lat (usec) : 250=60.79%, 500=39.16%, 750=0.04% 00:13:24.228 lat (msec) : 2=0.01% 00:13:24.228 cpu : usr=0.98%, sys=4.88%, ctx=10939, majf=0, minf=2 00:13:24.228 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:24.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.228 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.228 issued rwts: total=10938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.228 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:24.228 00:13:24.228 Run status group 0 (all jobs): 00:13:24.228 READ: bw=49.4MiB/s (51.8MB/s), 11.8MiB/s-17.2MiB/s (12.4MB/s-18.0MB/s), io=193MiB (202MB), run=2975-3907msec 00:13:24.228 00:13:24.228 Disk stats (read/write): 00:13:24.228 nvme0n1: ios=9970/0, merge=0/0, ticks=3071/0, in_queue=3071, util=95.08% 00:13:24.228 nvme0n2: ios=16989/0, merge=0/0, ticks=3606/0, in_queue=3606, util=95.73% 00:13:24.228 nvme0n3: ios=10389/0, merge=0/0, ticks=2924/0, in_queue=2924, util=96.46% 00:13:24.228 nvme0n4: ios=10629/0, merge=0/0, ticks=2685/0, in_queue=2685, util=96.79% 00:13:24.488 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:24.488 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:25.056 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:25.056 08:51:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:25.315 08:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:25.315 08:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:25.883 08:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:25.883 08:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:26.142 08:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:26.142 08:51:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 69282 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.401 nvmf hotplug test: fio failed as expected 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:26.401 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:26.969 rmmod nvme_tcp 00:13:26.969 rmmod nvme_fabrics 00:13:26.969 rmmod nvme_keyring 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 68887 ']' 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 68887 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 68887 ']' 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 68887 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68887 00:13:26.969 killing process with pid 68887 00:13:26.969 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:26.970 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:26.970 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68887' 00:13:26.970 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 68887 00:13:26.970 08:51:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 68887 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:27.907 08:51:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:27.907 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:28.165 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:28.165 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:28.165 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:28.165 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:28.165 08:51:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:28.165 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:28.165 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:28.165 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:28.165 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.165 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.165 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.165 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:13:28.165 00:13:28.165 real 0m22.421s 00:13:28.165 user 1m21.128s 00:13:28.165 sys 0m11.426s 00:13:28.165 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.165 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.165 ************************************ 00:13:28.165 END TEST nvmf_fio_target 00:13:28.165 ************************************ 00:13:28.165 08:51:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:28.165 08:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:28.165 08:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.166 08:51:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:28.166 ************************************ 00:13:28.166 START TEST nvmf_bdevio 00:13:28.166 ************************************ 00:13:28.166 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:28.425 * Looking for test storage... 
00:13:28.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:28.425 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:28.425 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:28.425 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:13:28.425 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:28.425 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:28.425 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:28.425 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:28.425 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:28.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.426 --rc genhtml_branch_coverage=1 00:13:28.426 --rc genhtml_function_coverage=1 00:13:28.426 --rc genhtml_legend=1 00:13:28.426 --rc geninfo_all_blocks=1 00:13:28.426 --rc geninfo_unexecuted_blocks=1 00:13:28.426 00:13:28.426 ' 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:28.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.426 --rc genhtml_branch_coverage=1 00:13:28.426 --rc genhtml_function_coverage=1 00:13:28.426 --rc genhtml_legend=1 00:13:28.426 --rc geninfo_all_blocks=1 00:13:28.426 --rc geninfo_unexecuted_blocks=1 00:13:28.426 00:13:28.426 ' 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:28.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.426 --rc genhtml_branch_coverage=1 00:13:28.426 --rc genhtml_function_coverage=1 00:13:28.426 --rc genhtml_legend=1 00:13:28.426 --rc geninfo_all_blocks=1 00:13:28.426 --rc geninfo_unexecuted_blocks=1 00:13:28.426 00:13:28.426 ' 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:28.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.426 --rc genhtml_branch_coverage=1 00:13:28.426 --rc genhtml_function_coverage=1 00:13:28.426 --rc genhtml_legend=1 00:13:28.426 --rc geninfo_all_blocks=1 00:13:28.426 --rc geninfo_unexecuted_blocks=1 00:13:28.426 00:13:28.426 ' 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:28.426 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:13:28.426 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:28.427 Cannot find device "nvmf_init_br" 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:28.427 Cannot find device "nvmf_init_br2" 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:28.427 Cannot find device "nvmf_tgt_br" 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:28.427 Cannot find device "nvmf_tgt_br2" 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:28.427 Cannot find device "nvmf_init_br" 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:13:28.427 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:28.718 Cannot find device "nvmf_init_br2" 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:28.718 Cannot find device "nvmf_tgt_br" 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:28.718 Cannot find device "nvmf_tgt_br2" 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:28.718 Cannot find device "nvmf_br" 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:28.718 Cannot find device "nvmf_init_if" 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:28.718 Cannot find device "nvmf_init_if2" 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:28.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:28.718 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:28.718 
08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:28.718 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:28.719 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:28.719 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:28.719 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:28.719 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:28.719 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:28.719 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:28.719 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:28.719 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:28.719 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:28.719 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:28.982 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:28.982 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:13:28.982 00:13:28.982 --- 10.0.0.3 ping statistics --- 00:13:28.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.982 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:28.982 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:28.982 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:13:28.982 00:13:28.982 --- 10.0.0.4 ping statistics --- 00:13:28.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.982 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:28.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:28.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:13:28.982 00:13:28.982 --- 10.0.0.1 ping statistics --- 00:13:28.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.982 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:28.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:28.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:13:28.982 00:13:28.982 --- 10.0.0.2 ping statistics --- 00:13:28.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.982 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=69662 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 69662 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 69662 ']' 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:28.982 08:51:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:28.983 [2024-09-28 08:51:06.879770] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
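Note: the nvmf_veth_init trace above is the entire virtual test network for this run. A condensed sketch of that topology follows, assuming the same interface names and 10.0.0.0/24 addressing the harness uses; it is not a verbatim replay of nvmf/common.sh (the comment tags and the second initiator/target pair are omitted):

# target side lives in its own namespace; each endpoint is a veth pair
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target side
# bridge the host-side peers together and bring everything up
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# allow NVMe/TCP (port 4420) in, then sanity-check reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3

The second pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) is created the same way, which is what the four pings traced above verify before the target is started.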
00:13:28.983 [2024-09-28 08:51:06.879987] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.241 [2024-09-28 08:51:07.057515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.500 [2024-09-28 08:51:07.302840] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.500 [2024-09-28 08:51:07.302893] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.500 [2024-09-28 08:51:07.302927] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.500 [2024-09-28 08:51:07.302938] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.500 [2024-09-28 08:51:07.302949] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.500 [2024-09-28 08:51:07.303167] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:13:29.500 [2024-09-28 08:51:07.303318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:13:29.500 [2024-09-28 08:51:07.303888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.500 [2024-09-28 08:51:07.303910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:13:29.500 [2024-09-28 08:51:07.477319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:30.069 [2024-09-28 08:51:07.897247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:30.069 Malloc0 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.069 08:51:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:30.069 [2024-09-28 08:51:07.998551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:30.069 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.069 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:30.069 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:30.069 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:13:30.069 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:13:30.069 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:13:30.069 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:13:30.069 { 00:13:30.069 "params": { 00:13:30.069 "name": "Nvme$subsystem", 00:13:30.069 "trtype": "$TEST_TRANSPORT", 00:13:30.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:30.069 "adrfam": "ipv4", 00:13:30.069 "trsvcid": "$NVMF_PORT", 00:13:30.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:30.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:30.069 "hdgst": ${hdgst:-false}, 00:13:30.069 "ddgst": ${ddgst:-false} 00:13:30.069 }, 00:13:30.069 "method": "bdev_nvme_attach_controller" 00:13:30.069 } 00:13:30.069 EOF 00:13:30.069 )") 00:13:30.069 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:13:30.069 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
00:13:30.069 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:13:30.069 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:13:30.069 "params": { 00:13:30.069 "name": "Nvme1", 00:13:30.069 "trtype": "tcp", 00:13:30.069 "traddr": "10.0.0.3", 00:13:30.069 "adrfam": "ipv4", 00:13:30.069 "trsvcid": "4420", 00:13:30.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:30.069 "hdgst": false, 00:13:30.069 "ddgst": false 00:13:30.069 }, 00:13:30.069 "method": "bdev_nvme_attach_controller" 00:13:30.069 }' 00:13:30.329 [2024-09-28 08:51:08.116298] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:30.329 [2024-09-28 08:51:08.116481] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69698 ] 00:13:30.329 [2024-09-28 08:51:08.292576] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:30.588 [2024-09-28 08:51:08.529349] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.588 [2024-09-28 08:51:08.530883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.588 [2024-09-28 08:51:08.530913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.847 [2024-09-28 08:51:08.722471] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:31.106 I/O targets: 00:13:31.106 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:31.106 00:13:31.106 00:13:31.106 CUnit - A unit testing framework for C - Version 2.1-3 00:13:31.106 http://cunit.sourceforge.net/ 00:13:31.106 00:13:31.106 00:13:31.106 Suite: bdevio tests on: Nvme1n1 00:13:31.106 Test: blockdev write read block ...passed 00:13:31.106 Test: blockdev write zeroes read block ...passed 00:13:31.106 Test: blockdev write zeroes read no split ...passed 00:13:31.106 Test: blockdev write zeroes read split ...passed 00:13:31.106 Test: blockdev write zeroes read split partial ...passed 00:13:31.106 Test: blockdev reset ...[2024-09-28 08:51:08.975318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:31.106 [2024-09-28 08:51:08.975506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:13:31.106 [2024-09-28 08:51:08.996113] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
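Note: the config handed to bdevio via --json /dev/fd/62 carries exactly the bdev_nvme_attach_controller entry printed above by gen_nvmf_target_json. A standalone sketch of the same invocation, written to a scratch file instead of a pipe, is shown below; the file name is made up here, and the outer subsystems/bdev wrapper is assumed from SPDK's usual JSON config layout rather than visible in this trace:

# hypothetical standalone equivalent, run from the spdk repo root
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json

The attached controller shows up as the Nvme1n1 namespace, which is the block device exercised by the "Suite: bdevio tests on: Nvme1n1" run started above.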
00:13:31.106 passed 00:13:31.106 Test: blockdev write read 8 blocks ...passed 00:13:31.106 Test: blockdev write read size > 128k ...passed 00:13:31.106 Test: blockdev write read invalid size ...passed 00:13:31.106 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:31.106 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:31.106 Test: blockdev write read max offset ...passed 00:13:31.106 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:31.106 Test: blockdev writev readv 8 blocks ...passed 00:13:31.106 Test: blockdev writev readv 30 x 1block ...passed 00:13:31.106 Test: blockdev writev readv block ...passed 00:13:31.106 Test: blockdev writev readv size > 128k ...passed 00:13:31.106 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:31.106 Test: blockdev comparev and writev ...[2024-09-28 08:51:09.007621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.106 [2024-09-28 08:51:09.007755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:31.106 [2024-09-28 08:51:09.007810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.106 [2024-09-28 08:51:09.007874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:31.106 [2024-09-28 08:51:09.008386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.106 [2024-09-28 08:51:09.008452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:31.106 [2024-09-28 08:51:09.008498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.106 [2024-09-28 08:51:09.008543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:31.106 [2024-09-28 08:51:09.009068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.106 [2024-09-28 08:51:09.009130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:31.106 [2024-09-28 08:51:09.009177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.106 [2024-09-28 08:51:09.009222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:31.106 [2024-09-28 08:51:09.009733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.106 [2024-09-28 08:51:09.009796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:31.106 [2024-09-28 08:51:09.009860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:31.106 [2024-09-28 08:51:09.009899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:31.106 passed 00:13:31.106 Test: blockdev nvme passthru rw ...passed 00:13:31.106 Test: blockdev nvme passthru vendor specific ...[2024-09-28 08:51:09.011307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:31.106 [2024-09-28 08:51:09.011381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:31.106 [2024-09-28 08:51:09.011598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:31.106 [2024-09-28 08:51:09.011664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:31.106 [2024-09-28 08:51:09.011877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:31.106 [2024-09-28 08:51:09.011942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:31.106 [2024-09-28 08:51:09.012137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:31.106 [2024-09-28 08:51:09.012199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:31.106 passed 00:13:31.106 Test: blockdev nvme admin passthru ...passed 00:13:31.106 Test: blockdev copy ...passed 00:13:31.106 00:13:31.106 Run Summary: Type Total Ran Passed Failed Inactive 00:13:31.106 suites 1 1 n/a 0 0 00:13:31.106 tests 23 23 23 0 0 00:13:31.106 asserts 152 152 152 0 n/a 00:13:31.106 00:13:31.106 Elapsed time = 0.298 seconds 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:32.485 rmmod nvme_tcp 00:13:32.485 rmmod nvme_fabrics 00:13:32.485 rmmod nvme_keyring 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
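Note: the bdevio pass summarized above ran against a target stood up entirely over RPC earlier in the trace (create the TCP transport, back the namespace with a Malloc bdev, create the subsystem, add the namespace and the listener). rpc_cmd is the harness wrapper around the RPC socket; outside the harness the same sequence would go through scripts/rpc.py, roughly:

# target runs inside nvmf_tgt_ns_spdk and listens on 10.0.0.3:4420
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The teardown traced around this point mirrors that setup: nvmf_delete_subsystem and the nvme-tcp/nvme-fabrics module unloads above, then below the target process is killed, the iptables rules tagged SPDK_NVMF are filtered back out, and the veth/bridge devices and the target namespace are removed.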
00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 69662 ']' 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 69662 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 69662 ']' 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 69662 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69662 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69662' 00:13:32.485 killing process with pid 69662 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 69662 00:13:32.485 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 69662 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:13:33.864 00:13:33.864 real 0m5.547s 00:13:33.864 user 0m19.861s 00:13:33.864 sys 0m1.099s 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.864 ************************************ 00:13:33.864 END TEST nvmf_bdevio 00:13:33.864 ************************************ 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:33.864 ************************************ 00:13:33.864 END TEST nvmf_target_core 00:13:33.864 ************************************ 00:13:33.864 00:13:33.864 real 2m57.771s 00:13:33.864 user 7m52.526s 00:13:33.864 sys 0m55.496s 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:33.864 08:51:11 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:33.864 08:51:11 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:33.864 08:51:11 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.864 08:51:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:33.864 ************************************ 00:13:33.864 START TEST nvmf_target_extra 00:13:33.864 ************************************ 00:13:33.864 08:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:34.124 * Looking for test storage... 
00:13:34.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:34.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.124 --rc genhtml_branch_coverage=1 00:13:34.124 --rc genhtml_function_coverage=1 00:13:34.124 --rc genhtml_legend=1 00:13:34.124 --rc geninfo_all_blocks=1 00:13:34.124 --rc geninfo_unexecuted_blocks=1 00:13:34.124 00:13:34.124 ' 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:34.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.124 --rc genhtml_branch_coverage=1 00:13:34.124 --rc genhtml_function_coverage=1 00:13:34.124 --rc genhtml_legend=1 00:13:34.124 --rc geninfo_all_blocks=1 00:13:34.124 --rc geninfo_unexecuted_blocks=1 00:13:34.124 00:13:34.124 ' 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:34.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.124 --rc genhtml_branch_coverage=1 00:13:34.124 --rc genhtml_function_coverage=1 00:13:34.124 --rc genhtml_legend=1 00:13:34.124 --rc geninfo_all_blocks=1 00:13:34.124 --rc geninfo_unexecuted_blocks=1 00:13:34.124 00:13:34.124 ' 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:34.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.124 --rc genhtml_branch_coverage=1 00:13:34.124 --rc genhtml_function_coverage=1 00:13:34.124 --rc genhtml_legend=1 00:13:34.124 --rc geninfo_all_blocks=1 00:13:34.124 --rc geninfo_unexecuted_blocks=1 00:13:34.124 00:13:34.124 ' 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.124 08:51:11 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.124 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:34.125 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:34.125 08:51:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:34.125 08:51:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:34.125 08:51:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:34.125 08:51:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:13:34.125 08:51:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:34.125 08:51:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:34.125 08:51:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:34.125 08:51:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:34.125 ************************************ 00:13:34.125 START TEST nvmf_auth_target 00:13:34.125 ************************************ 00:13:34.125 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:34.125 * Looking for test storage... 
00:13:34.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:34.125 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:34.125 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:13:34.125 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:34.385 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:34.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.386 --rc genhtml_branch_coverage=1 00:13:34.386 --rc genhtml_function_coverage=1 00:13:34.386 --rc genhtml_legend=1 00:13:34.386 --rc geninfo_all_blocks=1 00:13:34.386 --rc geninfo_unexecuted_blocks=1 00:13:34.386 00:13:34.386 ' 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:34.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.386 --rc genhtml_branch_coverage=1 00:13:34.386 --rc genhtml_function_coverage=1 00:13:34.386 --rc genhtml_legend=1 00:13:34.386 --rc geninfo_all_blocks=1 00:13:34.386 --rc geninfo_unexecuted_blocks=1 00:13:34.386 00:13:34.386 ' 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:34.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.386 --rc genhtml_branch_coverage=1 00:13:34.386 --rc genhtml_function_coverage=1 00:13:34.386 --rc genhtml_legend=1 00:13:34.386 --rc geninfo_all_blocks=1 00:13:34.386 --rc geninfo_unexecuted_blocks=1 00:13:34.386 00:13:34.386 ' 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:34.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.386 --rc genhtml_branch_coverage=1 00:13:34.386 --rc genhtml_function_coverage=1 00:13:34.386 --rc genhtml_legend=1 00:13:34.386 --rc geninfo_all_blocks=1 00:13:34.386 --rc geninfo_unexecuted_blocks=1 00:13:34.386 00:13:34.386 ' 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:34.386 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:13:34.386 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:34.387 
08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:34.387 Cannot find device "nvmf_init_br" 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:34.387 Cannot find device "nvmf_init_br2" 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:34.387 Cannot find device "nvmf_tgt_br" 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:34.387 Cannot find device "nvmf_tgt_br2" 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:34.387 Cannot find device "nvmf_init_br" 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:34.387 Cannot find device "nvmf_init_br2" 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:34.387 Cannot find device "nvmf_tgt_br" 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:34.387 Cannot find device "nvmf_tgt_br2" 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:34.387 Cannot find device "nvmf_br" 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:34.387 Cannot find device "nvmf_init_if" 00:13:34.387 08:51:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:34.387 Cannot find device "nvmf_init_if2" 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:34.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:34.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:13:34.387 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:34.647 08:51:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:34.647 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:34.648 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:34.648 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:13:34.648 00:13:34.648 --- 10.0.0.3 ping statistics --- 00:13:34.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.648 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:34.648 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:34.648 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:13:34.648 00:13:34.648 --- 10.0.0.4 ping statistics --- 00:13:34.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.648 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:34.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:34.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:13:34.648 00:13:34.648 --- 10.0.0.1 ping statistics --- 00:13:34.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.648 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:34.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:13:34.648 00:13:34.648 --- 10.0.0.2 ping statistics --- 00:13:34.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.648 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=70036 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 70036 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70036 ']' 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
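The nvmf_veth_init block above builds the whole test network out of veth pairs, a bridge, and a dedicated namespace for the target, then verifies it with the pings whose output is logged. A condensed sketch of the same topology with a single initiator/target pair (interface names, addresses, and port taken from the trace; the full script also creates the *_if2 pair and the extra bridge legs):

  ip netns add nvmf_tgt_ns_spdk                               # namespace that will run nvmf_tgt
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br                     # bridge the two halves together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.3                                          # connectivity sanity check, as in the log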
00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:34.648 08:51:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=70068 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=54011beb7fa76c0cd4c12cd1dbec5fac71a1c022ee823497 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.AQW 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 54011beb7fa76c0cd4c12cd1dbec5fac71a1c022ee823497 0 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 54011beb7fa76c0cd4c12cd1dbec5fac71a1c022ee823497 0 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=54011beb7fa76c0cd4c12cd1dbec5fac71a1c022ee823497 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:13:36.027 08:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.AQW 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.AQW 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.AQW 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=2019c9eb027dfe98e650e2e07c4319133ba2653087729e4c2596e92d2e55df79 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.kxa 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 2019c9eb027dfe98e650e2e07c4319133ba2653087729e4c2596e92d2e55df79 3 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 2019c9eb027dfe98e650e2e07c4319133ba2653087729e4c2596e92d2e55df79 3 00:13:36.027 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=2019c9eb027dfe98e650e2e07c4319133ba2653087729e4c2596e92d2e55df79 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.kxa 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.kxa 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.kxa 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:13:36.028 08:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=75067b2ed9336a1d102c3d85f7b63fd5 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.Yap 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 75067b2ed9336a1d102c3d85f7b63fd5 1 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 75067b2ed9336a1d102c3d85f7b63fd5 1 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=75067b2ed9336a1d102c3d85f7b63fd5 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.Yap 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.Yap 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Yap 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=19da33e8aed8377dc164910ef2bdce3ac53e494bd5c46235 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.hHr 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 19da33e8aed8377dc164910ef2bdce3ac53e494bd5c46235 2 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 19da33e8aed8377dc164910ef2bdce3ac53e494bd5c46235 2 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=19da33e8aed8377dc164910ef2bdce3ac53e494bd5c46235 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:13:36.028 08:51:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:13:36.028 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.hHr 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.hHr 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.hHr 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=6e5a2e62f6433f772cdd435742df7b028576dfa99ef94851 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.CvW 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 6e5a2e62f6433f772cdd435742df7b028576dfa99ef94851 2 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 6e5a2e62f6433f772cdd435742df7b028576dfa99ef94851 2 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=6e5a2e62f6433f772cdd435742df7b028576dfa99ef94851 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.CvW 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.CvW 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.CvW 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:13:36.286 08:51:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=cc0f98182511e4b711c29b1972d7769c 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.IPW 00:13:36.286 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key cc0f98182511e4b711c29b1972d7769c 1 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 cc0f98182511e4b711c29b1972d7769c 1 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=cc0f98182511e4b711c29b1972d7769c 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.IPW 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.IPW 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.IPW 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=ccb97ad48a2222cdc476b52687f3b8098546a3667d7bf7d68e2b6e59596a9be9 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.6dB 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
ccb97ad48a2222cdc476b52687f3b8098546a3667d7bf7d68e2b6e59596a9be9 3 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 ccb97ad48a2222cdc476b52687f3b8098546a3667d7bf7d68e2b6e59596a9be9 3 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=ccb97ad48a2222cdc476b52687f3b8098546a3667d7bf7d68e2b6e59596a9be9 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.6dB 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.6dB 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.6dB 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 70036 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70036 ']' 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.287 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.854 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:36.854 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:36.854 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 70068 /var/tmp/host.sock 00:13:36.854 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 70068 ']' 00:13:36.854 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:36.854 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:36.854 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
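The gen_dhchap_key calls above draw random hex from /dev/urandom with xxd and wrap it into the DHHC-1 secret format that the connect command at the end of this excerpt passes around (DHHC-1:00:...==:). A rough sketch of that wrapping, assuming the base64-of-key-plus-CRC32 layout used by nvme-cli; the exact python inside format_dhchap_key is not shown in this trace, so treat the one-liner below as illustrative:

  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars of key material, as in gen_dhchap_key null 48
  # Wrap the ASCII hex string as the key payload, append a little-endian CRC32, base64 the result;
  # "00" marks the retained-key hash (none) and would be 1/2/3 for sha256/sha384/sha512 (assumed)
  secret=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:00:"+base64.b64encode(k+crc).decode()+":")' "$key")
  keyfile=$(mktemp -t spdk.key-null.XXX)  # same naming scheme as the /tmp/spdk.key-*.??? files in the log
  echo "$secret" > "$keyfile" && chmod 0600 "$keyfile"   # key files must not be world-readable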
00:13:36.854 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.854 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.114 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.114 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:37.114 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:37.114 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.114 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.114 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.114 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:37.114 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.AQW 00:13:37.114 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.114 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.114 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.114 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.AQW 00:13:37.114 08:51:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.AQW 00:13:37.374 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.kxa ]] 00:13:37.374 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kxa 00:13:37.374 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.374 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.374 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.374 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kxa 00:13:37.374 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kxa 00:13:37.633 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:37.633 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Yap 00:13:37.633 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.633 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.633 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.633 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Yap 00:13:37.633 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Yap 00:13:37.892 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.hHr ]] 00:13:37.892 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hHr 00:13:37.892 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.892 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.892 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.892 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hHr 00:13:37.892 08:51:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hHr 00:13:38.150 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:38.150 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.CvW 00:13:38.150 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.150 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.150 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.150 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.CvW 00:13:38.150 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.CvW 00:13:38.409 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.IPW ]] 00:13:38.409 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IPW 00:13:38.409 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.409 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.409 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.409 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IPW 00:13:38.409 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IPW 00:13:38.667 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:38.667 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6dB 00:13:38.667 08:51:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.667 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.667 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.667 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.6dB 00:13:38.667 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.6dB 00:13:38.926 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:38.926 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:38.926 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:38.926 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.926 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:38.926 08:51:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:39.184 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:39.184 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.184 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:39.184 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:39.184 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:39.184 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.184 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.184 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.184 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.184 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.184 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.184 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.184 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.442 00:13:39.442 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.442 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.442 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.701 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.701 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.701 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.701 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.701 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.701 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.701 { 00:13:39.701 "cntlid": 1, 00:13:39.701 "qid": 0, 00:13:39.701 "state": "enabled", 00:13:39.701 "thread": "nvmf_tgt_poll_group_000", 00:13:39.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:13:39.701 "listen_address": { 00:13:39.701 "trtype": "TCP", 00:13:39.701 "adrfam": "IPv4", 00:13:39.701 "traddr": "10.0.0.3", 00:13:39.701 "trsvcid": "4420" 00:13:39.701 }, 00:13:39.701 "peer_address": { 00:13:39.701 "trtype": "TCP", 00:13:39.701 "adrfam": "IPv4", 00:13:39.701 "traddr": "10.0.0.1", 00:13:39.701 "trsvcid": "41050" 00:13:39.701 }, 00:13:39.701 "auth": { 00:13:39.701 "state": "completed", 00:13:39.701 "digest": "sha256", 00:13:39.701 "dhgroup": "null" 00:13:39.701 } 00:13:39.701 } 00:13:39.701 ]' 00:13:39.701 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.960 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:39.960 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.960 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:39.960 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.960 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.960 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.960 08:51:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.219 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:13:40.219 08:51:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:13:44.412 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.412 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:13:44.412 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.412 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.412 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.412 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.412 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:44.412 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:44.671 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:44.671 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.671 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:44.671 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:44.671 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:44.671 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.671 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:44.671 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.671 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.671 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.671 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:44.671 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:44.671 08:51:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:45.241 00:13:45.241 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.241 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.241 08:51:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.500 { 00:13:45.500 "cntlid": 3, 00:13:45.500 "qid": 0, 00:13:45.500 "state": "enabled", 00:13:45.500 "thread": "nvmf_tgt_poll_group_000", 00:13:45.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:13:45.500 "listen_address": { 00:13:45.500 "trtype": "TCP", 00:13:45.500 "adrfam": "IPv4", 00:13:45.500 "traddr": "10.0.0.3", 00:13:45.500 "trsvcid": "4420" 00:13:45.500 }, 00:13:45.500 "peer_address": { 00:13:45.500 "trtype": "TCP", 00:13:45.500 "adrfam": "IPv4", 00:13:45.500 "traddr": "10.0.0.1", 00:13:45.500 "trsvcid": "41076" 00:13:45.500 }, 00:13:45.500 "auth": { 00:13:45.500 "state": "completed", 00:13:45.500 "digest": "sha256", 00:13:45.500 "dhgroup": "null" 00:13:45.500 } 00:13:45.500 } 00:13:45.500 ]' 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.500 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.760 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret 
DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:13:45.760 08:51:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:13:46.331 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.331 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:13:46.331 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.331 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.331 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.331 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.331 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:46.331 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:46.589 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:46.589 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.589 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:46.589 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:46.589 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:46.589 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.589 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.589 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.589 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.589 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.589 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.590 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.590 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.848 00:13:47.107 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.107 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.107 08:51:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.107 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.107 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.107 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.107 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.367 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.367 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.367 { 00:13:47.367 "cntlid": 5, 00:13:47.367 "qid": 0, 00:13:47.367 "state": "enabled", 00:13:47.367 "thread": "nvmf_tgt_poll_group_000", 00:13:47.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:13:47.367 "listen_address": { 00:13:47.367 "trtype": "TCP", 00:13:47.367 "adrfam": "IPv4", 00:13:47.367 "traddr": "10.0.0.3", 00:13:47.367 "trsvcid": "4420" 00:13:47.367 }, 00:13:47.367 "peer_address": { 00:13:47.367 "trtype": "TCP", 00:13:47.367 "adrfam": "IPv4", 00:13:47.367 "traddr": "10.0.0.1", 00:13:47.367 "trsvcid": "42622" 00:13:47.367 }, 00:13:47.367 "auth": { 00:13:47.367 "state": "completed", 00:13:47.367 "digest": "sha256", 00:13:47.367 "dhgroup": "null" 00:13:47.367 } 00:13:47.367 } 00:13:47.367 ]' 00:13:47.367 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.367 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:47.367 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.367 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:47.367 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.367 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.367 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.367 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.626 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:13:47.626 08:51:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:13:48.193 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.193 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:13:48.193 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.193 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.452 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.452 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.452 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:48.452 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:48.712 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:13:48.712 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.712 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:48.712 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:48.712 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:48.712 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.712 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:13:48.712 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.712 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.712 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.712 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:48.712 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:48.712 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:48.971 00:13:48.971 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.971 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.971 08:51:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.230 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.230 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.230 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.230 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.230 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.230 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.230 { 00:13:49.230 "cntlid": 7, 00:13:49.230 "qid": 0, 00:13:49.230 "state": "enabled", 00:13:49.230 "thread": "nvmf_tgt_poll_group_000", 00:13:49.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:13:49.230 "listen_address": { 00:13:49.230 "trtype": "TCP", 00:13:49.230 "adrfam": "IPv4", 00:13:49.230 "traddr": "10.0.0.3", 00:13:49.230 "trsvcid": "4420" 00:13:49.230 }, 00:13:49.230 "peer_address": { 00:13:49.230 "trtype": "TCP", 00:13:49.230 "adrfam": "IPv4", 00:13:49.230 "traddr": "10.0.0.1", 00:13:49.230 "trsvcid": "42648" 00:13:49.230 }, 00:13:49.230 "auth": { 00:13:49.230 "state": "completed", 00:13:49.230 "digest": "sha256", 00:13:49.230 "dhgroup": "null" 00:13:49.230 } 00:13:49.230 } 00:13:49.230 ]' 00:13:49.230 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.230 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.230 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.489 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:49.489 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.489 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.489 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.489 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.748 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:13:49.748 08:51:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:13:50.315 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.315 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:13:50.315 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.316 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.316 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.316 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:50.316 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.316 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:50.316 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:50.575 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:13:50.575 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.575 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:50.575 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:50.575 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:50.575 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.575 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.575 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.575 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.834 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.834 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.834 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:50.834 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.092 00:13:51.092 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.092 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.092 08:51:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.350 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.350 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.350 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.350 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.350 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.350 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.350 { 00:13:51.350 "cntlid": 9, 00:13:51.350 "qid": 0, 00:13:51.350 "state": "enabled", 00:13:51.350 "thread": "nvmf_tgt_poll_group_000", 00:13:51.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:13:51.350 "listen_address": { 00:13:51.350 "trtype": "TCP", 00:13:51.350 "adrfam": "IPv4", 00:13:51.350 "traddr": "10.0.0.3", 00:13:51.350 "trsvcid": "4420" 00:13:51.350 }, 00:13:51.350 "peer_address": { 00:13:51.350 "trtype": "TCP", 00:13:51.350 "adrfam": "IPv4", 00:13:51.350 "traddr": "10.0.0.1", 00:13:51.350 "trsvcid": "42680" 00:13:51.350 }, 00:13:51.350 "auth": { 00:13:51.350 "state": "completed", 00:13:51.350 "digest": "sha256", 00:13:51.350 "dhgroup": "ffdhe2048" 00:13:51.350 } 00:13:51.350 } 00:13:51.350 ]' 00:13:51.350 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.350 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.350 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.614 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:51.614 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.614 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.614 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.614 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.891 
08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:13:51.891 08:51:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:13:52.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:13:52.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.458 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:52.459 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:52.717 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:13:52.717 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.717 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:52.717 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:52.717 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:52.717 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.717 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.717 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.717 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.717 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.717 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.717 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.717 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.975 00:13:53.234 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.234 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.234 08:51:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.493 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.493 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.493 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.493 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.493 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.493 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.493 { 00:13:53.493 "cntlid": 11, 00:13:53.493 "qid": 0, 00:13:53.493 "state": "enabled", 00:13:53.493 "thread": "nvmf_tgt_poll_group_000", 00:13:53.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:13:53.493 "listen_address": { 00:13:53.493 "trtype": "TCP", 00:13:53.493 "adrfam": "IPv4", 00:13:53.493 "traddr": "10.0.0.3", 00:13:53.493 "trsvcid": "4420" 00:13:53.493 }, 00:13:53.493 "peer_address": { 00:13:53.493 "trtype": "TCP", 00:13:53.493 "adrfam": "IPv4", 00:13:53.493 "traddr": "10.0.0.1", 00:13:53.493 "trsvcid": "42706" 00:13:53.493 }, 00:13:53.493 "auth": { 00:13:53.493 "state": "completed", 00:13:53.493 "digest": "sha256", 00:13:53.493 "dhgroup": "ffdhe2048" 00:13:53.493 } 00:13:53.493 } 00:13:53.493 ]' 00:13:53.493 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.493 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:53.493 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.493 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:53.493 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.493 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.493 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.493 
08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.751 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:13:53.751 08:51:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:13:54.689 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.689 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:13:54.689 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.689 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.689 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.689 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.689 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:54.689 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:54.948 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:13:54.948 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.948 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:54.948 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:54.948 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:54.948 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.948 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.948 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.948 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.948 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:54.948 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.948 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.948 08:51:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:55.207 00:13:55.207 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.207 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.207 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.466 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.466 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.466 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.466 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.466 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.466 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.466 { 00:13:55.466 "cntlid": 13, 00:13:55.466 "qid": 0, 00:13:55.466 "state": "enabled", 00:13:55.466 "thread": "nvmf_tgt_poll_group_000", 00:13:55.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:13:55.466 "listen_address": { 00:13:55.466 "trtype": "TCP", 00:13:55.466 "adrfam": "IPv4", 00:13:55.467 "traddr": "10.0.0.3", 00:13:55.467 "trsvcid": "4420" 00:13:55.467 }, 00:13:55.467 "peer_address": { 00:13:55.467 "trtype": "TCP", 00:13:55.467 "adrfam": "IPv4", 00:13:55.467 "traddr": "10.0.0.1", 00:13:55.467 "trsvcid": "42748" 00:13:55.467 }, 00:13:55.467 "auth": { 00:13:55.467 "state": "completed", 00:13:55.467 "digest": "sha256", 00:13:55.467 "dhgroup": "ffdhe2048" 00:13:55.467 } 00:13:55.467 } 00:13:55.467 ]' 00:13:55.467 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.467 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.467 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.467 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:55.467 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.726 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.726 08:51:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.726 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.985 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:13:55.985 08:51:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:13:56.553 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.553 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:13:56.553 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.553 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.553 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.553 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.553 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:56.553 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:56.812 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:13:56.812 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.812 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:56.812 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:56.812 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:56.812 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.812 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:13:56.812 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.812 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:56.812 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.812 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:56.812 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.812 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:57.071 00:13:57.071 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.071 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.071 08:51:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.329 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.329 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.329 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.329 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.329 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.329 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.329 { 00:13:57.329 "cntlid": 15, 00:13:57.329 "qid": 0, 00:13:57.329 "state": "enabled", 00:13:57.329 "thread": "nvmf_tgt_poll_group_000", 00:13:57.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:13:57.329 "listen_address": { 00:13:57.329 "trtype": "TCP", 00:13:57.329 "adrfam": "IPv4", 00:13:57.329 "traddr": "10.0.0.3", 00:13:57.329 "trsvcid": "4420" 00:13:57.329 }, 00:13:57.329 "peer_address": { 00:13:57.329 "trtype": "TCP", 00:13:57.329 "adrfam": "IPv4", 00:13:57.329 "traddr": "10.0.0.1", 00:13:57.329 "trsvcid": "55912" 00:13:57.329 }, 00:13:57.329 "auth": { 00:13:57.329 "state": "completed", 00:13:57.330 "digest": "sha256", 00:13:57.330 "dhgroup": "ffdhe2048" 00:13:57.330 } 00:13:57.330 } 00:13:57.330 ]' 00:13:57.330 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.330 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.330 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.330 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:57.330 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.588 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.588 
08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.588 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.588 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:13:57.588 08:51:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:13:58.154 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.414 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:13:58.414 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.414 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.414 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.414 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:58.414 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.414 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:58.414 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:58.673 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:58.673 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.673 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:58.673 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:58.673 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:58.673 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.673 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.673 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.673 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:58.673 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.673 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.673 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.673 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.931 00:13:58.931 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.931 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.931 08:51:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.191 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.191 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.191 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.191 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.191 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.191 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.191 { 00:13:59.191 "cntlid": 17, 00:13:59.191 "qid": 0, 00:13:59.191 "state": "enabled", 00:13:59.191 "thread": "nvmf_tgt_poll_group_000", 00:13:59.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:13:59.191 "listen_address": { 00:13:59.191 "trtype": "TCP", 00:13:59.191 "adrfam": "IPv4", 00:13:59.191 "traddr": "10.0.0.3", 00:13:59.191 "trsvcid": "4420" 00:13:59.191 }, 00:13:59.191 "peer_address": { 00:13:59.191 "trtype": "TCP", 00:13:59.191 "adrfam": "IPv4", 00:13:59.191 "traddr": "10.0.0.1", 00:13:59.191 "trsvcid": "55956" 00:13:59.191 }, 00:13:59.191 "auth": { 00:13:59.191 "state": "completed", 00:13:59.191 "digest": "sha256", 00:13:59.191 "dhgroup": "ffdhe3072" 00:13:59.191 } 00:13:59.191 } 00:13:59.191 ]' 00:13:59.191 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.191 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:59.191 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.451 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:59.451 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.451 08:51:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.451 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.451 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.710 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:13:59.710 08:51:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:00.278 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.278 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:00.278 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.278 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.278 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.278 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.278 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:00.278 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:00.537 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:00.537 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.537 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:00.537 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:00.537 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:00.537 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.537 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:14:00.537 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.537 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.537 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.537 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.537 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.537 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.796 00:14:00.796 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:00.796 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.796 08:51:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.054 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.054 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.054 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.054 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.314 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.314 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.314 { 00:14:01.314 "cntlid": 19, 00:14:01.314 "qid": 0, 00:14:01.314 "state": "enabled", 00:14:01.314 "thread": "nvmf_tgt_poll_group_000", 00:14:01.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:01.314 "listen_address": { 00:14:01.314 "trtype": "TCP", 00:14:01.314 "adrfam": "IPv4", 00:14:01.314 "traddr": "10.0.0.3", 00:14:01.314 "trsvcid": "4420" 00:14:01.314 }, 00:14:01.314 "peer_address": { 00:14:01.314 "trtype": "TCP", 00:14:01.314 "adrfam": "IPv4", 00:14:01.314 "traddr": "10.0.0.1", 00:14:01.314 "trsvcid": "55964" 00:14:01.314 }, 00:14:01.314 "auth": { 00:14:01.314 "state": "completed", 00:14:01.314 "digest": "sha256", 00:14:01.314 "dhgroup": "ffdhe3072" 00:14:01.314 } 00:14:01.314 } 00:14:01.314 ]' 00:14:01.314 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.314 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.314 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.314 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:01.314 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.314 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.314 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.314 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.573 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:01.573 08:51:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:02.184 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.184 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:02.184 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.184 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.184 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.184 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.184 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:02.185 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:02.444 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:02.444 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.444 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:02.444 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:02.444 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:02.444 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.444 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.444 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.444 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.444 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.444 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.444 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.444 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.703 00:14:02.703 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.703 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.703 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.271 08:51:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.271 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.271 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.271 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.271 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.271 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.271 { 00:14:03.271 "cntlid": 21, 00:14:03.271 "qid": 0, 00:14:03.271 "state": "enabled", 00:14:03.271 "thread": "nvmf_tgt_poll_group_000", 00:14:03.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:03.271 "listen_address": { 00:14:03.271 "trtype": "TCP", 00:14:03.271 "adrfam": "IPv4", 00:14:03.271 "traddr": "10.0.0.3", 00:14:03.271 "trsvcid": "4420" 00:14:03.271 }, 00:14:03.271 "peer_address": { 00:14:03.271 "trtype": "TCP", 00:14:03.271 "adrfam": "IPv4", 00:14:03.271 "traddr": "10.0.0.1", 00:14:03.271 "trsvcid": "55988" 00:14:03.271 }, 00:14:03.271 "auth": { 00:14:03.271 "state": "completed", 00:14:03.271 "digest": "sha256", 00:14:03.271 "dhgroup": "ffdhe3072" 00:14:03.271 } 00:14:03.271 } 00:14:03.271 ]' 00:14:03.271 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.271 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:03.271 08:51:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.271 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:03.271 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.271 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.271 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.271 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.530 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:03.530 08:51:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:04.098 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.098 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:04.098 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.098 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.098 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.098 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.098 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:04.098 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:04.357 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:04.357 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.357 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:04.357 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:04.357 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:04.357 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.357 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:14:04.357 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.357 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.616 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.616 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:04.616 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:04.616 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:04.874 00:14:04.874 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.874 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:04.874 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.133 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.133 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.133 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.133 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.133 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.133 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.133 { 00:14:05.133 "cntlid": 23, 00:14:05.133 "qid": 0, 00:14:05.133 "state": "enabled", 00:14:05.133 "thread": "nvmf_tgt_poll_group_000", 00:14:05.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:05.133 "listen_address": { 00:14:05.133 "trtype": "TCP", 00:14:05.133 "adrfam": "IPv4", 00:14:05.133 "traddr": "10.0.0.3", 00:14:05.133 "trsvcid": "4420" 00:14:05.133 }, 00:14:05.133 "peer_address": { 00:14:05.133 "trtype": "TCP", 00:14:05.133 "adrfam": "IPv4", 00:14:05.133 "traddr": "10.0.0.1", 00:14:05.133 "trsvcid": "56008" 00:14:05.133 }, 00:14:05.133 "auth": { 00:14:05.133 "state": "completed", 00:14:05.133 "digest": "sha256", 00:14:05.133 "dhgroup": "ffdhe3072" 00:14:05.133 } 00:14:05.133 } 00:14:05.133 ]' 00:14:05.133 08:51:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.133 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:05.133 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.133 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:05.133 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.392 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.392 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.392 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.650 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:05.651 08:51:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:06.218 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.218 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:06.218 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.218 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.218 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.218 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:06.218 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.218 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:06.218 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:06.477 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:06.477 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.477 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:06.477 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:06.477 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:06.477 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.477 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.477 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.477 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.477 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.477 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.477 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.477 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.736 00:14:06.736 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.736 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.736 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.995 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.995 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.995 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.995 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.995 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.995 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.995 { 00:14:06.995 "cntlid": 25, 00:14:06.995 "qid": 0, 00:14:06.995 "state": "enabled", 00:14:06.995 "thread": "nvmf_tgt_poll_group_000", 00:14:06.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:06.995 "listen_address": { 00:14:06.995 "trtype": "TCP", 00:14:06.995 "adrfam": "IPv4", 00:14:06.995 "traddr": "10.0.0.3", 00:14:06.995 "trsvcid": "4420" 00:14:06.995 }, 00:14:06.995 "peer_address": { 00:14:06.995 "trtype": "TCP", 00:14:06.995 "adrfam": "IPv4", 00:14:06.995 "traddr": "10.0.0.1", 00:14:06.995 "trsvcid": "57270" 00:14:06.995 }, 00:14:06.995 "auth": { 00:14:06.995 "state": "completed", 00:14:06.995 "digest": "sha256", 00:14:06.995 "dhgroup": "ffdhe4096" 00:14:06.995 } 00:14:06.995 } 00:14:06.995 ]' 00:14:06.995 08:51:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:14:07.254 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.254 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.254 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:07.254 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.254 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.254 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.254 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.512 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:07.512 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:08.078 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.078 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:08.078 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.078 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.078 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.078 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:08.078 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:08.078 08:51:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:08.337 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:08.337 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.337 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:08.337 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:08.337 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:08.337 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.337 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.337 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.337 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.337 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.337 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.337 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.337 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.595 00:14:08.853 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.853 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.853 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.853 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.853 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.853 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.853 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.111 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.111 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.111 { 00:14:09.111 "cntlid": 27, 00:14:09.111 "qid": 0, 00:14:09.111 "state": "enabled", 00:14:09.111 "thread": "nvmf_tgt_poll_group_000", 00:14:09.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:09.111 "listen_address": { 00:14:09.111 "trtype": "TCP", 00:14:09.111 "adrfam": "IPv4", 00:14:09.111 "traddr": "10.0.0.3", 00:14:09.111 "trsvcid": "4420" 00:14:09.111 }, 00:14:09.111 "peer_address": { 00:14:09.111 "trtype": "TCP", 00:14:09.111 "adrfam": "IPv4", 00:14:09.111 "traddr": "10.0.0.1", 00:14:09.111 "trsvcid": "57286" 00:14:09.111 }, 00:14:09.111 "auth": { 00:14:09.111 "state": "completed", 
00:14:09.111 "digest": "sha256", 00:14:09.111 "dhgroup": "ffdhe4096" 00:14:09.111 } 00:14:09.111 } 00:14:09.111 ]' 00:14:09.111 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.111 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:09.111 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.111 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:09.111 08:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.111 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.111 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.111 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.369 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:09.369 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:10.006 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.006 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:10.006 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.006 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.006 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.006 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.006 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:10.006 08:51:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:10.265 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:10.265 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.265 08:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:10.265 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:10.265 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:10.265 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.265 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.265 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.265 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.265 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.265 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.265 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.265 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.523 00:14:10.523 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.523 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.524 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.782 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.782 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.782 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.782 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.782 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.782 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.782 { 00:14:10.782 "cntlid": 29, 00:14:10.782 "qid": 0, 00:14:10.782 "state": "enabled", 00:14:10.782 "thread": "nvmf_tgt_poll_group_000", 00:14:10.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:10.782 "listen_address": { 00:14:10.782 "trtype": "TCP", 00:14:10.782 "adrfam": "IPv4", 00:14:10.782 "traddr": "10.0.0.3", 00:14:10.782 "trsvcid": "4420" 00:14:10.782 }, 00:14:10.782 "peer_address": { 00:14:10.782 "trtype": "TCP", 00:14:10.782 "adrfam": 
"IPv4", 00:14:10.782 "traddr": "10.0.0.1", 00:14:10.782 "trsvcid": "57308" 00:14:10.782 }, 00:14:10.782 "auth": { 00:14:10.782 "state": "completed", 00:14:10.782 "digest": "sha256", 00:14:10.782 "dhgroup": "ffdhe4096" 00:14:10.782 } 00:14:10.782 } 00:14:10.782 ]' 00:14:10.782 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.040 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:11.040 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.040 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:11.040 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.040 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.040 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.040 08:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.299 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:11.299 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:12.262 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.262 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:12.262 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.262 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.262 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.262 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.262 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:12.262 08:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:12.262 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:12.262 08:51:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.262 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:12.262 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:12.262 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:12.262 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.262 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:14:12.262 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.262 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.262 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.262 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:12.262 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:12.262 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:12.829 00:14:12.829 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.829 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.829 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.089 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.089 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.089 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.089 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.089 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.089 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.089 { 00:14:13.089 "cntlid": 31, 00:14:13.089 "qid": 0, 00:14:13.089 "state": "enabled", 00:14:13.089 "thread": "nvmf_tgt_poll_group_000", 00:14:13.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:13.089 "listen_address": { 00:14:13.089 "trtype": "TCP", 00:14:13.089 "adrfam": "IPv4", 00:14:13.089 "traddr": "10.0.0.3", 00:14:13.089 "trsvcid": "4420" 00:14:13.089 }, 00:14:13.089 "peer_address": { 00:14:13.089 "trtype": "TCP", 
00:14:13.089 "adrfam": "IPv4", 00:14:13.089 "traddr": "10.0.0.1", 00:14:13.089 "trsvcid": "57334" 00:14:13.089 }, 00:14:13.089 "auth": { 00:14:13.089 "state": "completed", 00:14:13.089 "digest": "sha256", 00:14:13.089 "dhgroup": "ffdhe4096" 00:14:13.089 } 00:14:13.089 } 00:14:13.089 ]' 00:14:13.089 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.089 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.089 08:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.089 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:13.089 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.089 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.089 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.089 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.656 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:13.656 08:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:14.224 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.224 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:14.224 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.224 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.224 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.224 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:14.224 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.224 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:14.224 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:14.483 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:14.483 
08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.483 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:14.483 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:14.483 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:14.483 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.483 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.483 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.483 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.483 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.483 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.483 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.483 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.050 00:14:15.050 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.050 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.050 08:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.309 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.309 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.309 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.309 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.309 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.309 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.309 { 00:14:15.309 "cntlid": 33, 00:14:15.309 "qid": 0, 00:14:15.309 "state": "enabled", 00:14:15.309 "thread": "nvmf_tgt_poll_group_000", 00:14:15.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:15.309 "listen_address": { 00:14:15.309 "trtype": "TCP", 00:14:15.309 "adrfam": "IPv4", 00:14:15.309 "traddr": 
"10.0.0.3", 00:14:15.309 "trsvcid": "4420" 00:14:15.309 }, 00:14:15.309 "peer_address": { 00:14:15.309 "trtype": "TCP", 00:14:15.309 "adrfam": "IPv4", 00:14:15.309 "traddr": "10.0.0.1", 00:14:15.309 "trsvcid": "57364" 00:14:15.309 }, 00:14:15.309 "auth": { 00:14:15.309 "state": "completed", 00:14:15.309 "digest": "sha256", 00:14:15.309 "dhgroup": "ffdhe6144" 00:14:15.309 } 00:14:15.309 } 00:14:15.310 ]' 00:14:15.310 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.310 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.310 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.310 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:15.310 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.569 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.569 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.569 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.828 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:15.829 08:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:16.397 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.397 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:16.397 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.397 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.397 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.397 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.397 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:16.397 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:16.655 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:16.655 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.655 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:16.655 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:16.655 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:16.655 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.655 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.655 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.656 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.656 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.656 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.656 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.656 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.223 00:14:17.223 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.223 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.223 08:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.223 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.223 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.223 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.223 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.223 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.223 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.223 { 00:14:17.223 "cntlid": 35, 00:14:17.223 "qid": 0, 00:14:17.223 "state": "enabled", 00:14:17.223 "thread": "nvmf_tgt_poll_group_000", 
00:14:17.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:17.223 "listen_address": { 00:14:17.223 "trtype": "TCP", 00:14:17.223 "adrfam": "IPv4", 00:14:17.223 "traddr": "10.0.0.3", 00:14:17.223 "trsvcid": "4420" 00:14:17.223 }, 00:14:17.223 "peer_address": { 00:14:17.223 "trtype": "TCP", 00:14:17.223 "adrfam": "IPv4", 00:14:17.223 "traddr": "10.0.0.1", 00:14:17.223 "trsvcid": "42996" 00:14:17.223 }, 00:14:17.223 "auth": { 00:14:17.223 "state": "completed", 00:14:17.224 "digest": "sha256", 00:14:17.224 "dhgroup": "ffdhe6144" 00:14:17.224 } 00:14:17.224 } 00:14:17.224 ]' 00:14:17.224 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.483 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.483 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.483 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:17.483 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.483 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.483 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.483 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.742 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:17.742 08:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:18.679 08:51:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.679 08:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.246 00:14:19.246 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.246 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.246 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.505 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.505 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.505 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.505 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.505 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.505 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.505 { 
00:14:19.505 "cntlid": 37, 00:14:19.505 "qid": 0, 00:14:19.505 "state": "enabled", 00:14:19.505 "thread": "nvmf_tgt_poll_group_000", 00:14:19.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:19.505 "listen_address": { 00:14:19.505 "trtype": "TCP", 00:14:19.505 "adrfam": "IPv4", 00:14:19.505 "traddr": "10.0.0.3", 00:14:19.505 "trsvcid": "4420" 00:14:19.505 }, 00:14:19.505 "peer_address": { 00:14:19.505 "trtype": "TCP", 00:14:19.505 "adrfam": "IPv4", 00:14:19.505 "traddr": "10.0.0.1", 00:14:19.505 "trsvcid": "43016" 00:14:19.505 }, 00:14:19.505 "auth": { 00:14:19.505 "state": "completed", 00:14:19.505 "digest": "sha256", 00:14:19.505 "dhgroup": "ffdhe6144" 00:14:19.505 } 00:14:19.505 } 00:14:19.505 ]' 00:14:19.505 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.505 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.505 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.764 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:19.764 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.764 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.764 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.764 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.022 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:20.022 08:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:20.589 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.847 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:20.847 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.847 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.847 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.847 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.847 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:20.847 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:21.106 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:21.106 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.106 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:21.106 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:21.106 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:21.106 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.106 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:14:21.106 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.106 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.106 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.106 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:21.106 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.106 08:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.673 00:14:21.674 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.674 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.674 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.674 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.674 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.674 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.674 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.674 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.674 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:14:21.674 { 00:14:21.674 "cntlid": 39, 00:14:21.674 "qid": 0, 00:14:21.674 "state": "enabled", 00:14:21.674 "thread": "nvmf_tgt_poll_group_000", 00:14:21.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:21.674 "listen_address": { 00:14:21.674 "trtype": "TCP", 00:14:21.674 "adrfam": "IPv4", 00:14:21.674 "traddr": "10.0.0.3", 00:14:21.674 "trsvcid": "4420" 00:14:21.674 }, 00:14:21.674 "peer_address": { 00:14:21.674 "trtype": "TCP", 00:14:21.674 "adrfam": "IPv4", 00:14:21.674 "traddr": "10.0.0.1", 00:14:21.674 "trsvcid": "43044" 00:14:21.674 }, 00:14:21.674 "auth": { 00:14:21.674 "state": "completed", 00:14:21.674 "digest": "sha256", 00:14:21.674 "dhgroup": "ffdhe6144" 00:14:21.674 } 00:14:21.674 } 00:14:21.674 ]' 00:14:21.674 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.932 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.932 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.932 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:21.933 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.933 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.933 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.933 08:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.192 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:22.192 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:23.129 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.129 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:23.129 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.129 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.129 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.129 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.129 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.129 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:23.129 08:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:23.129 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:23.129 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.129 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:23.129 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:23.129 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:23.129 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.129 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.129 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.129 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.129 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.129 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.129 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.129 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.706 00:14:23.706 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.706 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.706 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.965 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.965 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.965 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.965 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.965 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:14:23.965 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.965 { 00:14:23.965 "cntlid": 41, 00:14:23.965 "qid": 0, 00:14:23.965 "state": "enabled", 00:14:23.965 "thread": "nvmf_tgt_poll_group_000", 00:14:23.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:23.965 "listen_address": { 00:14:23.965 "trtype": "TCP", 00:14:23.965 "adrfam": "IPv4", 00:14:23.965 "traddr": "10.0.0.3", 00:14:23.965 "trsvcid": "4420" 00:14:23.965 }, 00:14:23.965 "peer_address": { 00:14:23.965 "trtype": "TCP", 00:14:23.965 "adrfam": "IPv4", 00:14:23.965 "traddr": "10.0.0.1", 00:14:23.965 "trsvcid": "43072" 00:14:23.965 }, 00:14:23.965 "auth": { 00:14:23.965 "state": "completed", 00:14:23.965 "digest": "sha256", 00:14:23.965 "dhgroup": "ffdhe8192" 00:14:23.965 } 00:14:23.965 } 00:14:23.965 ]' 00:14:23.965 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.225 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.225 08:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.225 08:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:24.225 08:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.225 08:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.225 08:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.225 08:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.486 08:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:24.486 08:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:25.054 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.054 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:25.054 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.054 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.313 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:25.313 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.313 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:25.313 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:25.572 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:25.572 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.572 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.572 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:25.572 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:25.572 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.572 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.572 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.572 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.572 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.572 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.572 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.572 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.141 00:14:26.141 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.141 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.141 08:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.400 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.400 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.400 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.400 08:52:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.400 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.400 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.400 { 00:14:26.400 "cntlid": 43, 00:14:26.400 "qid": 0, 00:14:26.400 "state": "enabled", 00:14:26.400 "thread": "nvmf_tgt_poll_group_000", 00:14:26.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:26.400 "listen_address": { 00:14:26.400 "trtype": "TCP", 00:14:26.400 "adrfam": "IPv4", 00:14:26.400 "traddr": "10.0.0.3", 00:14:26.400 "trsvcid": "4420" 00:14:26.400 }, 00:14:26.400 "peer_address": { 00:14:26.400 "trtype": "TCP", 00:14:26.400 "adrfam": "IPv4", 00:14:26.400 "traddr": "10.0.0.1", 00:14:26.400 "trsvcid": "43110" 00:14:26.400 }, 00:14:26.400 "auth": { 00:14:26.400 "state": "completed", 00:14:26.400 "digest": "sha256", 00:14:26.400 "dhgroup": "ffdhe8192" 00:14:26.400 } 00:14:26.400 } 00:14:26.400 ]' 00:14:26.400 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.400 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.401 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.401 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:26.401 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.660 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.660 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.660 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.660 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:26.660 08:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:27.228 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.487 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:27.487 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.487 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:27.487 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.487 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.487 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:27.487 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:27.747 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:27.747 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.747 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.747 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:27.747 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:27.747 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.747 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.747 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.747 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.747 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.747 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.747 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.747 08:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.316 00:14:28.316 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.316 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.316 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.575 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.575 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.575 08:52:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.575 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.575 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.575 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.575 { 00:14:28.575 "cntlid": 45, 00:14:28.575 "qid": 0, 00:14:28.575 "state": "enabled", 00:14:28.575 "thread": "nvmf_tgt_poll_group_000", 00:14:28.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:28.575 "listen_address": { 00:14:28.575 "trtype": "TCP", 00:14:28.575 "adrfam": "IPv4", 00:14:28.575 "traddr": "10.0.0.3", 00:14:28.575 "trsvcid": "4420" 00:14:28.575 }, 00:14:28.575 "peer_address": { 00:14:28.575 "trtype": "TCP", 00:14:28.575 "adrfam": "IPv4", 00:14:28.575 "traddr": "10.0.0.1", 00:14:28.575 "trsvcid": "53972" 00:14:28.575 }, 00:14:28.575 "auth": { 00:14:28.575 "state": "completed", 00:14:28.575 "digest": "sha256", 00:14:28.575 "dhgroup": "ffdhe8192" 00:14:28.575 } 00:14:28.575 } 00:14:28.575 ]' 00:14:28.575 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.575 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.575 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.575 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:28.575 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.575 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.575 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.575 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.834 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:28.834 08:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:29.769 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.770 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:29.770 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:29.770 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.770 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.770 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.770 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:29.770 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:30.028 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:30.028 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.028 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:30.028 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:30.028 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:30.028 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.028 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:14:30.028 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.028 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.028 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.028 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:30.028 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:30.028 08:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:30.594 00:14:30.594 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.594 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.594 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.853 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.853 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.853 
08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.853 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.853 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.853 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.853 { 00:14:30.853 "cntlid": 47, 00:14:30.853 "qid": 0, 00:14:30.853 "state": "enabled", 00:14:30.853 "thread": "nvmf_tgt_poll_group_000", 00:14:30.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:30.853 "listen_address": { 00:14:30.853 "trtype": "TCP", 00:14:30.853 "adrfam": "IPv4", 00:14:30.853 "traddr": "10.0.0.3", 00:14:30.853 "trsvcid": "4420" 00:14:30.853 }, 00:14:30.853 "peer_address": { 00:14:30.853 "trtype": "TCP", 00:14:30.853 "adrfam": "IPv4", 00:14:30.853 "traddr": "10.0.0.1", 00:14:30.853 "trsvcid": "53998" 00:14:30.853 }, 00:14:30.853 "auth": { 00:14:30.853 "state": "completed", 00:14:30.853 "digest": "sha256", 00:14:30.853 "dhgroup": "ffdhe8192" 00:14:30.853 } 00:14:30.853 } 00:14:30.853 ]' 00:14:30.853 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.853 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.853 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.111 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:31.111 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.111 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.111 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.111 08:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.369 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:31.369 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:31.936 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.936 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:31.936 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.936 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:31.936 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.936 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:31.936 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:31.936 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.936 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:31.936 08:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:32.194 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:32.195 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.195 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:32.195 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:32.195 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:32.195 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.195 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.195 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.195 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.195 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.195 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.195 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.195 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.761 00:14:32.761 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.761 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.761 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.761 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.761 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.761 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.761 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.020 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.020 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.020 { 00:14:33.020 "cntlid": 49, 00:14:33.020 "qid": 0, 00:14:33.020 "state": "enabled", 00:14:33.020 "thread": "nvmf_tgt_poll_group_000", 00:14:33.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:33.020 "listen_address": { 00:14:33.020 "trtype": "TCP", 00:14:33.020 "adrfam": "IPv4", 00:14:33.020 "traddr": "10.0.0.3", 00:14:33.020 "trsvcid": "4420" 00:14:33.020 }, 00:14:33.020 "peer_address": { 00:14:33.020 "trtype": "TCP", 00:14:33.020 "adrfam": "IPv4", 00:14:33.020 "traddr": "10.0.0.1", 00:14:33.020 "trsvcid": "54034" 00:14:33.020 }, 00:14:33.020 "auth": { 00:14:33.020 "state": "completed", 00:14:33.020 "digest": "sha384", 00:14:33.020 "dhgroup": "null" 00:14:33.020 } 00:14:33.020 } 00:14:33.020 ]' 00:14:33.020 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.020 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:33.020 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.020 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:33.020 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.020 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.020 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.020 08:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.278 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:33.278 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:34.231 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.231 08:52:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:34.231 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.231 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.231 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.231 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.231 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:34.231 08:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:34.490 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:34.490 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.490 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:34.490 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:34.490 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:34.490 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.490 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.490 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.490 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.490 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.490 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.490 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.490 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.749 00:14:34.749 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.749 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
00:14:34.749 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.008 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.008 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.008 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.008 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.008 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.008 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.008 { 00:14:35.008 "cntlid": 51, 00:14:35.008 "qid": 0, 00:14:35.008 "state": "enabled", 00:14:35.008 "thread": "nvmf_tgt_poll_group_000", 00:14:35.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:35.008 "listen_address": { 00:14:35.008 "trtype": "TCP", 00:14:35.008 "adrfam": "IPv4", 00:14:35.008 "traddr": "10.0.0.3", 00:14:35.008 "trsvcid": "4420" 00:14:35.008 }, 00:14:35.008 "peer_address": { 00:14:35.008 "trtype": "TCP", 00:14:35.008 "adrfam": "IPv4", 00:14:35.008 "traddr": "10.0.0.1", 00:14:35.008 "trsvcid": "54062" 00:14:35.008 }, 00:14:35.008 "auth": { 00:14:35.008 "state": "completed", 00:14:35.008 "digest": "sha384", 00:14:35.008 "dhgroup": "null" 00:14:35.008 } 00:14:35.008 } 00:14:35.008 ]' 00:14:35.008 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.008 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:35.008 08:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.267 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:35.267 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.268 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.268 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.268 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.527 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:35.527 08:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:36.095 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.095 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.095 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:36.095 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.095 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.095 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.095 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.095 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:36.095 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:36.355 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:36.355 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.355 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:36.355 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:36.355 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:36.355 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.355 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.355 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.355 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.355 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.355 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.355 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.355 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.923 00:14:36.923 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.923 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.923 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.182 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.182 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.182 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.182 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.182 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.182 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.182 { 00:14:37.182 "cntlid": 53, 00:14:37.182 "qid": 0, 00:14:37.182 "state": "enabled", 00:14:37.182 "thread": "nvmf_tgt_poll_group_000", 00:14:37.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:37.182 "listen_address": { 00:14:37.182 "trtype": "TCP", 00:14:37.182 "adrfam": "IPv4", 00:14:37.182 "traddr": "10.0.0.3", 00:14:37.182 "trsvcid": "4420" 00:14:37.182 }, 00:14:37.182 "peer_address": { 00:14:37.182 "trtype": "TCP", 00:14:37.182 "adrfam": "IPv4", 00:14:37.182 "traddr": "10.0.0.1", 00:14:37.182 "trsvcid": "44364" 00:14:37.182 }, 00:14:37.182 "auth": { 00:14:37.182 "state": "completed", 00:14:37.182 "digest": "sha384", 00:14:37.182 "dhgroup": "null" 00:14:37.182 } 00:14:37.182 } 00:14:37.182 ]' 00:14:37.182 08:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.183 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:37.183 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.183 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:37.183 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.183 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.183 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.183 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.441 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:37.441 08:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:38.383 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.383 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:38.383 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.383 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.383 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.383 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.383 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:38.383 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:38.642 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:38.643 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.643 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:38.643 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:38.643 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:38.643 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.643 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:14:38.643 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.643 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.643 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.643 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:38.643 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:38.643 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:38.901 00:14:38.901 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.901 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:14:38.901 08:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.160 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.160 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.160 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.160 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.160 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.160 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.160 { 00:14:39.160 "cntlid": 55, 00:14:39.160 "qid": 0, 00:14:39.160 "state": "enabled", 00:14:39.160 "thread": "nvmf_tgt_poll_group_000", 00:14:39.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:39.160 "listen_address": { 00:14:39.160 "trtype": "TCP", 00:14:39.160 "adrfam": "IPv4", 00:14:39.160 "traddr": "10.0.0.3", 00:14:39.160 "trsvcid": "4420" 00:14:39.160 }, 00:14:39.160 "peer_address": { 00:14:39.160 "trtype": "TCP", 00:14:39.160 "adrfam": "IPv4", 00:14:39.160 "traddr": "10.0.0.1", 00:14:39.160 "trsvcid": "44388" 00:14:39.160 }, 00:14:39.160 "auth": { 00:14:39.160 "state": "completed", 00:14:39.160 "digest": "sha384", 00:14:39.160 "dhgroup": "null" 00:14:39.160 } 00:14:39.160 } 00:14:39.160 ]' 00:14:39.160 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.419 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:39.419 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.419 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:39.419 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.419 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.419 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.419 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.679 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:39.679 08:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:40.248 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:14:40.248 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:40.248 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.248 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.248 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.248 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:40.248 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.248 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:40.248 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:40.507 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:40.507 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.507 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:40.507 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:40.507 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:40.507 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.507 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.507 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.507 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.507 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.507 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.507 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.507 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.075 00:14:41.075 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.076 
08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.076 08:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.335 { 00:14:41.335 "cntlid": 57, 00:14:41.335 "qid": 0, 00:14:41.335 "state": "enabled", 00:14:41.335 "thread": "nvmf_tgt_poll_group_000", 00:14:41.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:41.335 "listen_address": { 00:14:41.335 "trtype": "TCP", 00:14:41.335 "adrfam": "IPv4", 00:14:41.335 "traddr": "10.0.0.3", 00:14:41.335 "trsvcid": "4420" 00:14:41.335 }, 00:14:41.335 "peer_address": { 00:14:41.335 "trtype": "TCP", 00:14:41.335 "adrfam": "IPv4", 00:14:41.335 "traddr": "10.0.0.1", 00:14:41.335 "trsvcid": "44408" 00:14:41.335 }, 00:14:41.335 "auth": { 00:14:41.335 "state": "completed", 00:14:41.335 "digest": "sha384", 00:14:41.335 "dhgroup": "ffdhe2048" 00:14:41.335 } 00:14:41.335 } 00:14:41.335 ]' 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.335 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.593 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:41.593 08:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: 
--dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:42.159 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.159 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:42.159 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.159 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.159 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.159 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.159 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:42.159 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:42.727 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:42.727 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.727 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:42.727 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:42.727 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:42.727 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.727 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.727 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.727 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.727 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.727 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.727 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.727 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.985 00:14:42.985 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.985 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.985 08:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.244 { 00:14:43.244 "cntlid": 59, 00:14:43.244 "qid": 0, 00:14:43.244 "state": "enabled", 00:14:43.244 "thread": "nvmf_tgt_poll_group_000", 00:14:43.244 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:43.244 "listen_address": { 00:14:43.244 "trtype": "TCP", 00:14:43.244 "adrfam": "IPv4", 00:14:43.244 "traddr": "10.0.0.3", 00:14:43.244 "trsvcid": "4420" 00:14:43.244 }, 00:14:43.244 "peer_address": { 00:14:43.244 "trtype": "TCP", 00:14:43.244 "adrfam": "IPv4", 00:14:43.244 "traddr": "10.0.0.1", 00:14:43.244 "trsvcid": "44434" 00:14:43.244 }, 00:14:43.244 "auth": { 00:14:43.244 "state": "completed", 00:14:43.244 "digest": "sha384", 00:14:43.244 "dhgroup": "ffdhe2048" 00:14:43.244 } 00:14:43.244 } 00:14:43.244 ]' 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.244 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.812 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:43.812 08:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:44.425 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.426 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:44.426 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.426 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.426 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.426 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.426 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:44.426 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:44.684 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:44.684 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.684 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:44.684 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:44.684 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:44.684 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.684 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.684 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.684 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.684 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.684 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.684 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.684 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.943 00:14:44.943 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.943 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.943 08:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.202 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.202 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.202 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.202 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.202 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.202 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.202 { 00:14:45.202 "cntlid": 61, 00:14:45.202 "qid": 0, 00:14:45.202 "state": "enabled", 00:14:45.202 "thread": "nvmf_tgt_poll_group_000", 00:14:45.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:45.202 "listen_address": { 00:14:45.202 "trtype": "TCP", 00:14:45.202 "adrfam": "IPv4", 00:14:45.202 "traddr": "10.0.0.3", 00:14:45.202 "trsvcid": "4420" 00:14:45.202 }, 00:14:45.202 "peer_address": { 00:14:45.202 "trtype": "TCP", 00:14:45.202 "adrfam": "IPv4", 00:14:45.202 "traddr": "10.0.0.1", 00:14:45.202 "trsvcid": "44476" 00:14:45.202 }, 00:14:45.202 "auth": { 00:14:45.202 "state": "completed", 00:14:45.202 "digest": "sha384", 00:14:45.202 "dhgroup": "ffdhe2048" 00:14:45.202 } 00:14:45.202 } 00:14:45.202 ]' 00:14:45.202 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.202 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:45.202 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.461 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:45.461 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:45.461 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.461 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.461 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.720 08:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:45.720 08:52:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:46.287 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.287 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:46.287 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.287 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.287 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.287 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.287 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:46.287 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:46.546 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:46.546 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.546 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:46.546 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:46.546 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:46.546 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.546 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:14:46.546 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.546 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.546 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.546 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:46.546 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.546 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:46.805 00:14:47.064 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.064 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.064 08:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.323 { 00:14:47.323 "cntlid": 63, 00:14:47.323 "qid": 0, 00:14:47.323 "state": "enabled", 00:14:47.323 "thread": "nvmf_tgt_poll_group_000", 00:14:47.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:47.323 "listen_address": { 00:14:47.323 "trtype": "TCP", 00:14:47.323 "adrfam": "IPv4", 00:14:47.323 "traddr": "10.0.0.3", 00:14:47.323 "trsvcid": "4420" 00:14:47.323 }, 00:14:47.323 "peer_address": { 00:14:47.323 "trtype": "TCP", 00:14:47.323 "adrfam": "IPv4", 00:14:47.323 "traddr": "10.0.0.1", 00:14:47.323 "trsvcid": "37922" 00:14:47.323 }, 00:14:47.323 "auth": { 00:14:47.323 "state": "completed", 00:14:47.323 "digest": "sha384", 00:14:47.323 "dhgroup": "ffdhe2048" 00:14:47.323 } 00:14:47.323 } 00:14:47.323 ]' 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.323 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.582 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:47.582 08:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:48.151 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.410 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.668 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.668 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.668 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:14:48.668 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.927 00:14:48.927 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.927 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.927 08:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.187 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.187 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.187 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.187 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.187 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.187 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.187 { 00:14:49.187 "cntlid": 65, 00:14:49.187 "qid": 0, 00:14:49.187 "state": "enabled", 00:14:49.187 "thread": "nvmf_tgt_poll_group_000", 00:14:49.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:49.187 "listen_address": { 00:14:49.187 "trtype": "TCP", 00:14:49.187 "adrfam": "IPv4", 00:14:49.187 "traddr": "10.0.0.3", 00:14:49.187 "trsvcid": "4420" 00:14:49.187 }, 00:14:49.187 "peer_address": { 00:14:49.187 "trtype": "TCP", 00:14:49.187 "adrfam": "IPv4", 00:14:49.187 "traddr": "10.0.0.1", 00:14:49.187 "trsvcid": "37956" 00:14:49.187 }, 00:14:49.187 "auth": { 00:14:49.187 "state": "completed", 00:14:49.187 "digest": "sha384", 00:14:49.187 "dhgroup": "ffdhe3072" 00:14:49.187 } 00:14:49.187 } 00:14:49.187 ]' 00:14:49.187 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.187 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:49.187 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.187 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:49.187 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.446 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.446 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.446 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.706 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:49.706 08:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:50.274 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.274 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:50.274 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.274 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.274 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.274 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.275 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:50.275 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:50.534 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:14:50.534 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.534 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:50.534 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:50.534 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:50.534 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.534 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.534 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.534 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.534 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.534 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.534 08:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.534 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.102 00:14:51.102 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.102 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.102 08:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.361 { 00:14:51.361 "cntlid": 67, 00:14:51.361 "qid": 0, 00:14:51.361 "state": "enabled", 00:14:51.361 "thread": "nvmf_tgt_poll_group_000", 00:14:51.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:51.361 "listen_address": { 00:14:51.361 "trtype": "TCP", 00:14:51.361 "adrfam": "IPv4", 00:14:51.361 "traddr": "10.0.0.3", 00:14:51.361 "trsvcid": "4420" 00:14:51.361 }, 00:14:51.361 "peer_address": { 00:14:51.361 "trtype": "TCP", 00:14:51.361 "adrfam": "IPv4", 00:14:51.361 "traddr": "10.0.0.1", 00:14:51.361 "trsvcid": "37982" 00:14:51.361 }, 00:14:51.361 "auth": { 00:14:51.361 "state": "completed", 00:14:51.361 "digest": "sha384", 00:14:51.361 "dhgroup": "ffdhe3072" 00:14:51.361 } 00:14:51.361 } 00:14:51.361 ]' 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.361 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.620 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:51.620 08:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:52.557 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.558 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.558 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.558 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.558 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.558 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.558 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.558 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.125 00:14:53.126 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.126 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.126 08:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.384 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.385 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.385 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.385 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.385 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.385 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.385 { 00:14:53.385 "cntlid": 69, 00:14:53.385 "qid": 0, 00:14:53.385 "state": "enabled", 00:14:53.385 "thread": "nvmf_tgt_poll_group_000", 00:14:53.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:53.385 "listen_address": { 00:14:53.385 "trtype": "TCP", 00:14:53.385 "adrfam": "IPv4", 00:14:53.385 "traddr": "10.0.0.3", 00:14:53.385 "trsvcid": "4420" 00:14:53.385 }, 00:14:53.385 "peer_address": { 00:14:53.385 "trtype": "TCP", 00:14:53.385 "adrfam": "IPv4", 00:14:53.385 "traddr": "10.0.0.1", 00:14:53.385 "trsvcid": "38000" 00:14:53.385 }, 00:14:53.385 "auth": { 00:14:53.385 "state": "completed", 00:14:53.385 "digest": "sha384", 00:14:53.385 "dhgroup": "ffdhe3072" 00:14:53.385 } 00:14:53.385 } 00:14:53.385 ]' 00:14:53.385 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.385 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.385 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.385 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:53.385 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.385 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.385 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:53.385 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.643 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:53.643 08:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:14:54.655 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.655 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:54.655 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.655 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.655 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.655 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.655 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:54.655 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:54.914 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:14:54.914 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.914 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:54.914 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:54.914 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:54.914 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.914 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:14:54.914 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.914 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.914 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.914 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:54.914 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:54.914 08:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:55.173 00:14:55.173 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.173 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.173 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.431 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.431 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.431 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.431 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.431 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.431 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.431 { 00:14:55.431 "cntlid": 71, 00:14:55.431 "qid": 0, 00:14:55.431 "state": "enabled", 00:14:55.431 "thread": "nvmf_tgt_poll_group_000", 00:14:55.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:55.431 "listen_address": { 00:14:55.431 "trtype": "TCP", 00:14:55.431 "adrfam": "IPv4", 00:14:55.431 "traddr": "10.0.0.3", 00:14:55.431 "trsvcid": "4420" 00:14:55.431 }, 00:14:55.431 "peer_address": { 00:14:55.431 "trtype": "TCP", 00:14:55.431 "adrfam": "IPv4", 00:14:55.431 "traddr": "10.0.0.1", 00:14:55.431 "trsvcid": "38026" 00:14:55.431 }, 00:14:55.431 "auth": { 00:14:55.431 "state": "completed", 00:14:55.431 "digest": "sha384", 00:14:55.431 "dhgroup": "ffdhe3072" 00:14:55.431 } 00:14:55.431 } 00:14:55.431 ]' 00:14:55.431 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.690 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.690 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.690 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:55.690 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.690 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.690 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.690 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.948 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:55.948 08:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:14:56.515 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.515 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:56.515 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.515 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.773 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.773 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:56.773 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.773 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:56.773 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:57.032 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:14:57.032 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.032 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:57.032 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:57.032 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:57.032 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.032 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.032 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.032 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.032 08:52:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.032 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.032 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.032 08:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.291 00:14:57.291 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.291 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.291 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.549 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.549 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.549 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.549 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.549 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.549 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.549 { 00:14:57.549 "cntlid": 73, 00:14:57.549 "qid": 0, 00:14:57.549 "state": "enabled", 00:14:57.549 "thread": "nvmf_tgt_poll_group_000", 00:14:57.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:57.549 "listen_address": { 00:14:57.549 "trtype": "TCP", 00:14:57.549 "adrfam": "IPv4", 00:14:57.549 "traddr": "10.0.0.3", 00:14:57.549 "trsvcid": "4420" 00:14:57.549 }, 00:14:57.549 "peer_address": { 00:14:57.549 "trtype": "TCP", 00:14:57.549 "adrfam": "IPv4", 00:14:57.549 "traddr": "10.0.0.1", 00:14:57.549 "trsvcid": "52224" 00:14:57.549 }, 00:14:57.549 "auth": { 00:14:57.549 "state": "completed", 00:14:57.549 "digest": "sha384", 00:14:57.549 "dhgroup": "ffdhe4096" 00:14:57.549 } 00:14:57.549 } 00:14:57.549 ]' 00:14:57.549 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.549 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:57.549 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.808 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:57.808 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.808 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.808 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.808 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.066 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:58.066 08:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:14:58.633 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.892 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:14:58.892 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.892 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.892 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.892 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.892 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:58.892 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:59.152 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:59.152 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.152 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:59.152 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:59.152 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:59.152 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.152 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.152 08:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.152 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.152 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.152 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.152 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.152 08:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.411 00:14:59.411 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.411 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.411 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.670 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.670 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.670 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.670 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.670 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.670 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.670 { 00:14:59.670 "cntlid": 75, 00:14:59.670 "qid": 0, 00:14:59.670 "state": "enabled", 00:14:59.670 "thread": "nvmf_tgt_poll_group_000", 00:14:59.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:14:59.670 "listen_address": { 00:14:59.670 "trtype": "TCP", 00:14:59.670 "adrfam": "IPv4", 00:14:59.670 "traddr": "10.0.0.3", 00:14:59.670 "trsvcid": "4420" 00:14:59.670 }, 00:14:59.670 "peer_address": { 00:14:59.670 "trtype": "TCP", 00:14:59.670 "adrfam": "IPv4", 00:14:59.670 "traddr": "10.0.0.1", 00:14:59.670 "trsvcid": "52252" 00:14:59.670 }, 00:14:59.670 "auth": { 00:14:59.670 "state": "completed", 00:14:59.670 "digest": "sha384", 00:14:59.670 "dhgroup": "ffdhe4096" 00:14:59.670 } 00:14:59.670 } 00:14:59.670 ]' 00:14:59.670 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.929 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:59.929 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.929 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:14:59.929 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.929 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.929 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.929 08:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.188 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:00.188 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:00.756 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.756 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:00.756 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.756 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.756 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.756 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.756 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:00.756 08:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:01.324 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:01.324 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.324 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:01.324 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:01.324 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:01.324 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.324 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.324 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.324 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.324 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.324 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.324 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.324 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.582 00:15:01.582 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.582 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.582 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.842 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.842 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.842 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.842 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.842 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.842 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.842 { 00:15:01.842 "cntlid": 77, 00:15:01.842 "qid": 0, 00:15:01.842 "state": "enabled", 00:15:01.842 "thread": "nvmf_tgt_poll_group_000", 00:15:01.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:01.842 "listen_address": { 00:15:01.842 "trtype": "TCP", 00:15:01.842 "adrfam": "IPv4", 00:15:01.842 "traddr": "10.0.0.3", 00:15:01.842 "trsvcid": "4420" 00:15:01.842 }, 00:15:01.842 "peer_address": { 00:15:01.842 "trtype": "TCP", 00:15:01.842 "adrfam": "IPv4", 00:15:01.842 "traddr": "10.0.0.1", 00:15:01.842 "trsvcid": "52282" 00:15:01.842 }, 00:15:01.842 "auth": { 00:15:01.842 "state": "completed", 00:15:01.842 "digest": "sha384", 00:15:01.842 "dhgroup": "ffdhe4096" 00:15:01.842 } 00:15:01.842 } 00:15:01.842 ]' 00:15:01.842 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.842 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:01.842 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:15:01.842 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:01.842 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.102 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.102 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.102 08:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.360 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:02.360 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:02.927 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.927 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:02.927 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.927 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.927 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.927 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.927 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:02.927 08:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:03.186 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:03.186 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.186 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:03.186 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:03.186 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:03.186 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.186 08:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:15:03.186 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.186 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.186 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.186 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:03.186 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.186 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.753 00:15:03.753 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.753 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.753 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.024 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.024 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.024 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.024 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.024 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.024 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.024 { 00:15:04.024 "cntlid": 79, 00:15:04.024 "qid": 0, 00:15:04.024 "state": "enabled", 00:15:04.024 "thread": "nvmf_tgt_poll_group_000", 00:15:04.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:04.024 "listen_address": { 00:15:04.024 "trtype": "TCP", 00:15:04.024 "adrfam": "IPv4", 00:15:04.024 "traddr": "10.0.0.3", 00:15:04.024 "trsvcid": "4420" 00:15:04.024 }, 00:15:04.024 "peer_address": { 00:15:04.024 "trtype": "TCP", 00:15:04.024 "adrfam": "IPv4", 00:15:04.024 "traddr": "10.0.0.1", 00:15:04.024 "trsvcid": "52320" 00:15:04.024 }, 00:15:04.024 "auth": { 00:15:04.024 "state": "completed", 00:15:04.024 "digest": "sha384", 00:15:04.024 "dhgroup": "ffdhe4096" 00:15:04.024 } 00:15:04.024 } 00:15:04.024 ]' 00:15:04.024 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.024 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.024 08:52:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.024 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:04.024 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.024 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.024 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.025 08:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.283 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:04.283 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:05.220 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.220 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:05.220 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.220 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.220 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.220 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:05.220 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.220 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:05.220 08:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:05.220 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:05.220 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.220 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:05.220 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:05.220 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:05.220 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.220 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.220 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.220 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.220 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.220 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.220 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.220 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.787 00:15:05.787 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.787 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.787 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.046 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.046 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.046 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.046 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.046 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.046 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.046 { 00:15:06.046 "cntlid": 81, 00:15:06.046 "qid": 0, 00:15:06.046 "state": "enabled", 00:15:06.046 "thread": "nvmf_tgt_poll_group_000", 00:15:06.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:06.046 "listen_address": { 00:15:06.046 "trtype": "TCP", 00:15:06.046 "adrfam": "IPv4", 00:15:06.046 "traddr": "10.0.0.3", 00:15:06.046 "trsvcid": "4420" 00:15:06.046 }, 00:15:06.046 "peer_address": { 00:15:06.046 "trtype": "TCP", 00:15:06.046 "adrfam": "IPv4", 00:15:06.046 "traddr": "10.0.0.1", 00:15:06.046 "trsvcid": "52340" 00:15:06.046 }, 00:15:06.046 "auth": { 00:15:06.046 "state": "completed", 00:15:06.046 "digest": "sha384", 00:15:06.046 "dhgroup": "ffdhe6144" 00:15:06.046 } 00:15:06.046 } 00:15:06.046 ]' 00:15:06.046 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
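In outline, each connect_authenticate pass recorded in this trace reduces to roughly the following host/target RPC sequence. This is a reading aid only, a minimal sketch assembled from the commands already shown above; it assumes rpc_cmd addresses the target-side RPC socket while hostrpc wraps rpc.py -s /var/tmp/host.sock, and the digest/dhgroup/key values are just the ones from the current ffdhe6144 pass:

  # 1. restrict the host-side bdev layer to one digest/dhgroup combination
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # 2. register the host NQN on the target subsystem with the key pair under test
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 3. attach a controller from the host bdev layer with the matching keys
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 4. confirm the qpair negotiated the expected auth parameters (sketch of the jq checks above)
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect "completed"

The trace then detaches the controller, repeats the same cycle for key1 through key3, and moves on to the next dhgroup.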
00:15:06.046 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.046 08:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.046 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:06.046 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.305 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.305 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.305 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.564 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:06.564 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:07.132 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.132 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:07.132 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.133 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.133 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.133 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.133 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:07.133 08:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:07.392 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:07.392 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.392 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:07.392 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:15:07.392 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:07.392 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.392 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.392 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.392 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.392 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.392 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.392 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.392 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.962 00:15:07.962 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.962 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.962 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.221 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.221 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.221 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.221 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.221 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.221 08:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.221 { 00:15:08.221 "cntlid": 83, 00:15:08.221 "qid": 0, 00:15:08.221 "state": "enabled", 00:15:08.221 "thread": "nvmf_tgt_poll_group_000", 00:15:08.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:08.221 "listen_address": { 00:15:08.221 "trtype": "TCP", 00:15:08.221 "adrfam": "IPv4", 00:15:08.221 "traddr": "10.0.0.3", 00:15:08.221 "trsvcid": "4420" 00:15:08.221 }, 00:15:08.221 "peer_address": { 00:15:08.221 "trtype": "TCP", 00:15:08.221 "adrfam": "IPv4", 00:15:08.221 "traddr": "10.0.0.1", 00:15:08.221 "trsvcid": "35588" 00:15:08.221 }, 00:15:08.221 "auth": { 00:15:08.221 "state": "completed", 00:15:08.221 "digest": "sha384", 
00:15:08.221 "dhgroup": "ffdhe6144" 00:15:08.221 } 00:15:08.221 } 00:15:08.221 ]' 00:15:08.221 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.221 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.221 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.221 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:08.221 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.221 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.221 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.221 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.480 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:08.480 08:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.418 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.991 00:15:09.991 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.991 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.991 08:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.250 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.250 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.250 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.250 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.250 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.250 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.250 { 00:15:10.250 "cntlid": 85, 00:15:10.250 "qid": 0, 00:15:10.250 "state": "enabled", 00:15:10.250 "thread": "nvmf_tgt_poll_group_000", 00:15:10.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:10.250 "listen_address": { 00:15:10.250 "trtype": "TCP", 00:15:10.250 "adrfam": "IPv4", 00:15:10.250 "traddr": "10.0.0.3", 00:15:10.250 "trsvcid": "4420" 00:15:10.250 }, 00:15:10.250 "peer_address": { 00:15:10.250 "trtype": "TCP", 00:15:10.250 "adrfam": "IPv4", 00:15:10.250 "traddr": "10.0.0.1", 00:15:10.250 "trsvcid": "35606" 
00:15:10.250 }, 00:15:10.250 "auth": { 00:15:10.250 "state": "completed", 00:15:10.250 "digest": "sha384", 00:15:10.250 "dhgroup": "ffdhe6144" 00:15:10.250 } 00:15:10.250 } 00:15:10.250 ]' 00:15:10.250 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.250 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.250 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.510 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:10.510 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.510 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.510 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.510 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.770 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:10.770 08:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:11.338 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.338 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:11.338 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.338 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.338 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.338 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.338 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:11.338 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:11.596 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:11.596 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:15:11.596 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:11.596 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:11.596 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:11.597 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.597 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:15:11.597 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.597 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.597 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.597 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:11.597 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.597 08:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:12.164 00:15:12.164 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.164 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.164 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.422 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.422 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.422 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.422 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.422 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.422 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.422 { 00:15:12.422 "cntlid": 87, 00:15:12.422 "qid": 0, 00:15:12.422 "state": "enabled", 00:15:12.423 "thread": "nvmf_tgt_poll_group_000", 00:15:12.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:12.423 "listen_address": { 00:15:12.423 "trtype": "TCP", 00:15:12.423 "adrfam": "IPv4", 00:15:12.423 "traddr": "10.0.0.3", 00:15:12.423 "trsvcid": "4420" 00:15:12.423 }, 00:15:12.423 "peer_address": { 00:15:12.423 "trtype": "TCP", 00:15:12.423 "adrfam": "IPv4", 00:15:12.423 "traddr": "10.0.0.1", 00:15:12.423 "trsvcid": 
"35650" 00:15:12.423 }, 00:15:12.423 "auth": { 00:15:12.423 "state": "completed", 00:15:12.423 "digest": "sha384", 00:15:12.423 "dhgroup": "ffdhe6144" 00:15:12.423 } 00:15:12.423 } 00:15:12.423 ]' 00:15:12.423 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.423 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.423 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.682 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:12.682 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.682 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.682 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.682 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.940 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:12.940 08:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:13.508 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.508 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:13.508 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.508 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.508 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.508 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.508 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.508 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:13.508 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:13.767 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:13.767 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:15:13.767 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:13.767 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:13.767 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:13.767 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.767 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.767 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.767 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.767 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.767 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.767 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.767 08:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.705 00:15:14.705 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.705 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.705 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.705 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.705 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.705 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.705 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.964 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.964 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.964 { 00:15:14.964 "cntlid": 89, 00:15:14.964 "qid": 0, 00:15:14.964 "state": "enabled", 00:15:14.964 "thread": "nvmf_tgt_poll_group_000", 00:15:14.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:14.964 "listen_address": { 00:15:14.964 "trtype": "TCP", 00:15:14.964 "adrfam": "IPv4", 00:15:14.964 "traddr": "10.0.0.3", 00:15:14.964 "trsvcid": "4420" 00:15:14.964 }, 00:15:14.964 "peer_address": { 00:15:14.964 
"trtype": "TCP", 00:15:14.964 "adrfam": "IPv4", 00:15:14.964 "traddr": "10.0.0.1", 00:15:14.964 "trsvcid": "35688" 00:15:14.964 }, 00:15:14.964 "auth": { 00:15:14.964 "state": "completed", 00:15:14.964 "digest": "sha384", 00:15:14.964 "dhgroup": "ffdhe8192" 00:15:14.964 } 00:15:14.964 } 00:15:14.964 ]' 00:15:14.964 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.964 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.964 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.964 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:14.964 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.964 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.964 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.964 08:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.222 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:15.222 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:16.158 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.158 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:16.158 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.158 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.158 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.158 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.158 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:16.158 08:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:16.158 08:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:16.158 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.158 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:16.158 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:16.158 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:16.158 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.158 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.158 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.158 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.158 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.158 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.158 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.158 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.093 00:15:17.093 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.093 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.093 08:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.093 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.093 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.093 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.093 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.352 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.352 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.352 { 00:15:17.352 "cntlid": 91, 00:15:17.352 "qid": 0, 00:15:17.352 "state": "enabled", 00:15:17.352 "thread": "nvmf_tgt_poll_group_000", 00:15:17.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 
00:15:17.352 "listen_address": { 00:15:17.352 "trtype": "TCP", 00:15:17.352 "adrfam": "IPv4", 00:15:17.352 "traddr": "10.0.0.3", 00:15:17.352 "trsvcid": "4420" 00:15:17.352 }, 00:15:17.352 "peer_address": { 00:15:17.352 "trtype": "TCP", 00:15:17.352 "adrfam": "IPv4", 00:15:17.352 "traddr": "10.0.0.1", 00:15:17.352 "trsvcid": "55682" 00:15:17.352 }, 00:15:17.352 "auth": { 00:15:17.352 "state": "completed", 00:15:17.352 "digest": "sha384", 00:15:17.352 "dhgroup": "ffdhe8192" 00:15:17.352 } 00:15:17.352 } 00:15:17.352 ]' 00:15:17.352 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.352 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.352 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.352 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:17.352 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.352 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.352 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.352 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.611 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:17.611 08:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:18.178 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.178 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:18.178 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.178 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.178 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.178 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.178 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:18.178 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:18.746 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:18.746 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.746 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:18.746 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:18.746 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:18.746 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.746 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.746 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.746 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.746 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.746 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.746 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.746 08:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.314 00:15:19.314 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.314 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.314 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.573 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.573 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.573 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.573 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.573 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.573 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.573 { 00:15:19.573 "cntlid": 93, 00:15:19.573 "qid": 0, 00:15:19.573 "state": "enabled", 00:15:19.573 "thread": 
"nvmf_tgt_poll_group_000", 00:15:19.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:19.574 "listen_address": { 00:15:19.574 "trtype": "TCP", 00:15:19.574 "adrfam": "IPv4", 00:15:19.574 "traddr": "10.0.0.3", 00:15:19.574 "trsvcid": "4420" 00:15:19.574 }, 00:15:19.574 "peer_address": { 00:15:19.574 "trtype": "TCP", 00:15:19.574 "adrfam": "IPv4", 00:15:19.574 "traddr": "10.0.0.1", 00:15:19.574 "trsvcid": "55702" 00:15:19.574 }, 00:15:19.574 "auth": { 00:15:19.574 "state": "completed", 00:15:19.574 "digest": "sha384", 00:15:19.574 "dhgroup": "ffdhe8192" 00:15:19.574 } 00:15:19.574 } 00:15:19.574 ]' 00:15:19.574 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.574 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.574 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.574 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:19.574 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.574 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.574 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.574 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.141 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:20.141 08:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:20.708 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.708 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:20.708 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.708 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.708 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.708 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.708 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:20.708 08:52:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:20.967 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:20.967 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.967 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:20.967 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:20.967 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:20.967 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.967 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:15:20.967 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.967 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.967 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.967 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:20.967 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.967 08:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.533 00:15:21.533 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.533 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.533 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.101 { 00:15:22.101 "cntlid": 95, 00:15:22.101 "qid": 0, 00:15:22.101 "state": "enabled", 00:15:22.101 
"thread": "nvmf_tgt_poll_group_000", 00:15:22.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:22.101 "listen_address": { 00:15:22.101 "trtype": "TCP", 00:15:22.101 "adrfam": "IPv4", 00:15:22.101 "traddr": "10.0.0.3", 00:15:22.101 "trsvcid": "4420" 00:15:22.101 }, 00:15:22.101 "peer_address": { 00:15:22.101 "trtype": "TCP", 00:15:22.101 "adrfam": "IPv4", 00:15:22.101 "traddr": "10.0.0.1", 00:15:22.101 "trsvcid": "55728" 00:15:22.101 }, 00:15:22.101 "auth": { 00:15:22.101 "state": "completed", 00:15:22.101 "digest": "sha384", 00:15:22.101 "dhgroup": "ffdhe8192" 00:15:22.101 } 00:15:22.101 } 00:15:22.101 ]' 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.101 08:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.360 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:22.360 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:23.294 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.294 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:23.294 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.294 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.294 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.294 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:23.294 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:23.294 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.295 08:53:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:23.295 08:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:23.295 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:23.295 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.295 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:23.295 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:23.295 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:23.295 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.295 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.295 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.295 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.295 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.295 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.295 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.295 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.553 00:15:23.811 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.811 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.811 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.071 { 00:15:24.071 "cntlid": 97, 00:15:24.071 "qid": 0, 00:15:24.071 "state": "enabled", 00:15:24.071 "thread": "nvmf_tgt_poll_group_000", 00:15:24.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:24.071 "listen_address": { 00:15:24.071 "trtype": "TCP", 00:15:24.071 "adrfam": "IPv4", 00:15:24.071 "traddr": "10.0.0.3", 00:15:24.071 "trsvcid": "4420" 00:15:24.071 }, 00:15:24.071 "peer_address": { 00:15:24.071 "trtype": "TCP", 00:15:24.071 "adrfam": "IPv4", 00:15:24.071 "traddr": "10.0.0.1", 00:15:24.071 "trsvcid": "55754" 00:15:24.071 }, 00:15:24.071 "auth": { 00:15:24.071 "state": "completed", 00:15:24.071 "digest": "sha512", 00:15:24.071 "dhgroup": "null" 00:15:24.071 } 00:15:24.071 } 00:15:24.071 ]' 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.071 08:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.333 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:24.333 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:25.266 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.266 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:25.266 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.266 08:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.266 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:25.266 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.266 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:25.266 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:25.523 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:25.523 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.523 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:25.523 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:25.523 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:25.523 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.523 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.523 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.523 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.523 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.523 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.523 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.523 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.781 00:15:25.781 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.781 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.781 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.039 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.039 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.039 08:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.039 08:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.039 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.039 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.039 { 00:15:26.039 "cntlid": 99, 00:15:26.039 "qid": 0, 00:15:26.039 "state": "enabled", 00:15:26.039 "thread": "nvmf_tgt_poll_group_000", 00:15:26.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:26.039 "listen_address": { 00:15:26.039 "trtype": "TCP", 00:15:26.039 "adrfam": "IPv4", 00:15:26.039 "traddr": "10.0.0.3", 00:15:26.039 "trsvcid": "4420" 00:15:26.039 }, 00:15:26.039 "peer_address": { 00:15:26.039 "trtype": "TCP", 00:15:26.039 "adrfam": "IPv4", 00:15:26.039 "traddr": "10.0.0.1", 00:15:26.039 "trsvcid": "55772" 00:15:26.039 }, 00:15:26.039 "auth": { 00:15:26.039 "state": "completed", 00:15:26.039 "digest": "sha512", 00:15:26.039 "dhgroup": "null" 00:15:26.039 } 00:15:26.039 } 00:15:26.039 ]' 00:15:26.039 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.298 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.298 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.298 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:26.298 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.298 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.298 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.298 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.556 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:26.556 08:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:27.493 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.493 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:27.493 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.493 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.493 08:53:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.493 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.493 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:27.493 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:27.754 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:27.754 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.754 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:27.754 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:27.754 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:27.754 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.754 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.754 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.754 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.754 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.754 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.754 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.754 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.013 00:15:28.013 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.013 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.013 08:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.272 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.272 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.272 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.272 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.272 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.272 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.272 { 00:15:28.272 "cntlid": 101, 00:15:28.272 "qid": 0, 00:15:28.272 "state": "enabled", 00:15:28.272 "thread": "nvmf_tgt_poll_group_000", 00:15:28.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:28.272 "listen_address": { 00:15:28.272 "trtype": "TCP", 00:15:28.272 "adrfam": "IPv4", 00:15:28.272 "traddr": "10.0.0.3", 00:15:28.272 "trsvcid": "4420" 00:15:28.272 }, 00:15:28.272 "peer_address": { 00:15:28.272 "trtype": "TCP", 00:15:28.272 "adrfam": "IPv4", 00:15:28.272 "traddr": "10.0.0.1", 00:15:28.272 "trsvcid": "51594" 00:15:28.272 }, 00:15:28.272 "auth": { 00:15:28.272 "state": "completed", 00:15:28.272 "digest": "sha512", 00:15:28.272 "dhgroup": "null" 00:15:28.272 } 00:15:28.272 } 00:15:28.272 ]' 00:15:28.272 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.530 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:28.530 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.530 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:28.530 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.530 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.530 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.530 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.789 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:28.789 08:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:29.726 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.726 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:29.726 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.726 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
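The qpairs JSON dumped above is what the test asserts on. The three jq probes in the trace amount to the checks below (a standalone sketch; the qpairs variable is assumed to hold the JSON returned by nvmf_subsystem_get_qpairs):

  # Negotiated digest, DH group, and final auth state of the first qpair.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "null" ]]      # ffdhe2048/ffdhe3072 in later passes
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]] # authentication succeeded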
-- common/autotest_common.sh@10 -- # set +x 00:15:29.726 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.726 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.726 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:29.726 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:29.985 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:29.985 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.985 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:29.985 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:29.985 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:29.985 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.985 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:15:29.985 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.985 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.985 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.985 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:29.985 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:29.985 08:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.244 00:15:30.244 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.244 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.244 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.503 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.503 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.503 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:30.503 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.503 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.503 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.503 { 00:15:30.503 "cntlid": 103, 00:15:30.503 "qid": 0, 00:15:30.503 "state": "enabled", 00:15:30.503 "thread": "nvmf_tgt_poll_group_000", 00:15:30.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:30.503 "listen_address": { 00:15:30.503 "trtype": "TCP", 00:15:30.503 "adrfam": "IPv4", 00:15:30.503 "traddr": "10.0.0.3", 00:15:30.503 "trsvcid": "4420" 00:15:30.503 }, 00:15:30.503 "peer_address": { 00:15:30.503 "trtype": "TCP", 00:15:30.503 "adrfam": "IPv4", 00:15:30.503 "traddr": "10.0.0.1", 00:15:30.503 "trsvcid": "51614" 00:15:30.503 }, 00:15:30.503 "auth": { 00:15:30.503 "state": "completed", 00:15:30.503 "digest": "sha512", 00:15:30.503 "dhgroup": "null" 00:15:30.503 } 00:15:30.503 } 00:15:30.503 ]' 00:15:30.503 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.762 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:30.762 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.762 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:30.762 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.762 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.762 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.762 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.022 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:31.022 08:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:31.590 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.590 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:31.590 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.590 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.590 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
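Note that the key3 passes (here and under the other DH groups) carry only --dhchap-key key3 and no --dhchap-ctrlr-key: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line at target/auth.sh@68 expands to nothing when that slot of ckeys is empty. A small, purely illustrative bash sketch of the idiom (array contents are made up):

  # ${var:+word} expands to word only when var is set and non-empty.
  ckeys=("ckey-a" "ckey-b" "ckey-c" "")   # index 3 left empty, mirroring this run
  keyid=3
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]}"   # 0 for key3, 2 (flag + value) for key0..key2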
]] 00:15:31.590 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:31.590 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.590 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:31.590 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:31.849 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:31.849 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.849 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:31.849 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:31.849 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:31.849 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.849 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.849 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.849 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.849 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.849 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.849 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.850 08:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.417 00:15:32.417 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.417 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.417 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.675 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.675 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.675 
08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.675 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.675 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.675 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.675 { 00:15:32.675 "cntlid": 105, 00:15:32.675 "qid": 0, 00:15:32.675 "state": "enabled", 00:15:32.675 "thread": "nvmf_tgt_poll_group_000", 00:15:32.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:32.675 "listen_address": { 00:15:32.675 "trtype": "TCP", 00:15:32.675 "adrfam": "IPv4", 00:15:32.675 "traddr": "10.0.0.3", 00:15:32.675 "trsvcid": "4420" 00:15:32.675 }, 00:15:32.675 "peer_address": { 00:15:32.675 "trtype": "TCP", 00:15:32.675 "adrfam": "IPv4", 00:15:32.675 "traddr": "10.0.0.1", 00:15:32.675 "trsvcid": "51646" 00:15:32.675 }, 00:15:32.675 "auth": { 00:15:32.675 "state": "completed", 00:15:32.675 "digest": "sha512", 00:15:32.675 "dhgroup": "ffdhe2048" 00:15:32.675 } 00:15:32.675 } 00:15:32.675 ]' 00:15:32.675 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.675 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:32.675 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.675 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:32.675 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.675 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.675 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.675 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.933 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:32.933 08:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:33.869 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.869 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:33.869 08:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.869 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.869 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.869 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.869 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:33.869 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:34.128 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:34.128 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.128 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:34.128 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:34.128 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:34.128 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.128 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.128 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.128 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.128 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.128 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.128 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.128 08:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.387 00:15:34.387 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.387 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.387 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.646 { 00:15:34.646 "cntlid": 107, 00:15:34.646 "qid": 0, 00:15:34.646 "state": "enabled", 00:15:34.646 "thread": "nvmf_tgt_poll_group_000", 00:15:34.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:34.646 "listen_address": { 00:15:34.646 "trtype": "TCP", 00:15:34.646 "adrfam": "IPv4", 00:15:34.646 "traddr": "10.0.0.3", 00:15:34.646 "trsvcid": "4420" 00:15:34.646 }, 00:15:34.646 "peer_address": { 00:15:34.646 "trtype": "TCP", 00:15:34.646 "adrfam": "IPv4", 00:15:34.646 "traddr": "10.0.0.1", 00:15:34.646 "trsvcid": "51668" 00:15:34.646 }, 00:15:34.646 "auth": { 00:15:34.646 "state": "completed", 00:15:34.646 "digest": "sha512", 00:15:34.646 "dhgroup": "ffdhe2048" 00:15:34.646 } 00:15:34.646 } 00:15:34.646 ]' 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.646 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.256 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:35.256 08:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:35.824 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.824 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
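Besides the host RPC path, every pass also authenticates through nvme-cli with the same secrets. Stripped of the trace prefixes, the connect/disconnect pair looks like this (the DHHC-1 strings are replaced with placeholders here; the full values appear in the log above):

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 \
      --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 \
      --dhchap-secret 'DHHC-1:01:<host key>' --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'

  # Tear the fabric connection down before the next key/dhgroup combination;
  # the trace expects "disconnected 1 controller(s)" here.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0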
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:35.824 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.824 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.824 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.824 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.824 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:35.824 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:36.085 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:36.085 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.085 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:36.085 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:36.085 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:36.085 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.085 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.085 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.085 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.085 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.085 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.085 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.085 08:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.344 00:15:36.603 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.603 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.603 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:36.862 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.862 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.862 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.862 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.862 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.862 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.862 { 00:15:36.862 "cntlid": 109, 00:15:36.862 "qid": 0, 00:15:36.862 "state": "enabled", 00:15:36.862 "thread": "nvmf_tgt_poll_group_000", 00:15:36.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:36.862 "listen_address": { 00:15:36.862 "trtype": "TCP", 00:15:36.862 "adrfam": "IPv4", 00:15:36.862 "traddr": "10.0.0.3", 00:15:36.862 "trsvcid": "4420" 00:15:36.862 }, 00:15:36.862 "peer_address": { 00:15:36.862 "trtype": "TCP", 00:15:36.862 "adrfam": "IPv4", 00:15:36.862 "traddr": "10.0.0.1", 00:15:36.862 "trsvcid": "50040" 00:15:36.862 }, 00:15:36.862 "auth": { 00:15:36.862 "state": "completed", 00:15:36.862 "digest": "sha512", 00:15:36.862 "dhgroup": "ffdhe2048" 00:15:36.862 } 00:15:36.862 } 00:15:36.862 ]' 00:15:36.862 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.862 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:36.863 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.863 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:36.863 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.863 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.863 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.863 08:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.121 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:37.121 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.061 08:53:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.061 08:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.320 00:15:38.320 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.320 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.320 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.579 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.579 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.579 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.579 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.579 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.579 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.579 { 00:15:38.579 "cntlid": 111, 00:15:38.579 "qid": 0, 00:15:38.579 "state": "enabled", 00:15:38.579 "thread": "nvmf_tgt_poll_group_000", 00:15:38.579 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:38.579 "listen_address": { 00:15:38.579 "trtype": "TCP", 00:15:38.579 "adrfam": "IPv4", 00:15:38.579 "traddr": "10.0.0.3", 00:15:38.579 "trsvcid": "4420" 00:15:38.579 }, 00:15:38.579 "peer_address": { 00:15:38.579 "trtype": "TCP", 00:15:38.579 "adrfam": "IPv4", 00:15:38.579 "traddr": "10.0.0.1", 00:15:38.579 "trsvcid": "50068" 00:15:38.579 }, 00:15:38.579 "auth": { 00:15:38.579 "state": "completed", 00:15:38.579 "digest": "sha512", 00:15:38.579 "dhgroup": "ffdhe2048" 00:15:38.579 } 00:15:38.579 } 00:15:38.579 ]' 00:15:38.579 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.837 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:38.837 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.837 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:38.837 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.837 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.837 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.837 08:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.094 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:39.094 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:40.028 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.028 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:40.028 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.028 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.028 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.028 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.028 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.028 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:40.028 08:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:40.596 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:40.596 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.596 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:40.596 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:40.596 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.596 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.596 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.596 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.596 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.596 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.596 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.596 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.596 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.855 00:15:40.855 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.855 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.855 08:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.115 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.115 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.115 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.115 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.374 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.374 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.374 { 00:15:41.374 "cntlid": 113, 00:15:41.374 "qid": 0, 00:15:41.374 "state": "enabled", 00:15:41.374 "thread": "nvmf_tgt_poll_group_000", 00:15:41.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:41.374 "listen_address": { 00:15:41.374 "trtype": "TCP", 00:15:41.374 "adrfam": "IPv4", 00:15:41.374 "traddr": "10.0.0.3", 00:15:41.374 "trsvcid": "4420" 00:15:41.374 }, 00:15:41.374 "peer_address": { 00:15:41.374 "trtype": "TCP", 00:15:41.374 "adrfam": "IPv4", 00:15:41.374 "traddr": "10.0.0.1", 00:15:41.374 "trsvcid": "50100" 00:15:41.374 }, 00:15:41.374 "auth": { 00:15:41.374 "state": "completed", 00:15:41.374 "digest": "sha512", 00:15:41.374 "dhgroup": "ffdhe3072" 00:15:41.374 } 00:15:41.374 } 00:15:41.374 ]' 00:15:41.374 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.374 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:41.374 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.374 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:41.374 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.374 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.374 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.374 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.634 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:41.634 08:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:42.572 
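The repetition in this part of the log comes from the driver loops visible in the target/auth.sh@119-123 trace lines: for each DH group, every key index is re-tested after re-issuing bdev_nvme_set_options. Roughly (hostrpc and connect_authenticate are the script's own helpers; the array contents reflect only what this portion of the log shows):

  digest=sha512
  dhgroups=(null ffdhe2048 ffdhe3072)   # the groups exercised in this chunk of the log
  keys=(key0 key1 key2 key3)

  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
  done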
08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.572 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:42.572 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.572 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.572 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.572 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.572 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:42.572 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:42.831 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:15:42.831 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.831 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:42.831 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:42.831 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:42.831 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.831 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.831 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.831 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.831 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.831 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.831 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.832 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.091 00:15:43.091 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.091 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.091 08:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.351 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.351 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.351 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.351 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.351 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.351 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.351 { 00:15:43.351 "cntlid": 115, 00:15:43.351 "qid": 0, 00:15:43.351 "state": "enabled", 00:15:43.351 "thread": "nvmf_tgt_poll_group_000", 00:15:43.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:43.351 "listen_address": { 00:15:43.351 "trtype": "TCP", 00:15:43.351 "adrfam": "IPv4", 00:15:43.351 "traddr": "10.0.0.3", 00:15:43.351 "trsvcid": "4420" 00:15:43.351 }, 00:15:43.351 "peer_address": { 00:15:43.351 "trtype": "TCP", 00:15:43.351 "adrfam": "IPv4", 00:15:43.351 "traddr": "10.0.0.1", 00:15:43.351 "trsvcid": "50116" 00:15:43.351 }, 00:15:43.351 "auth": { 00:15:43.351 "state": "completed", 00:15:43.351 "digest": "sha512", 00:15:43.351 "dhgroup": "ffdhe3072" 00:15:43.351 } 00:15:43.351 } 00:15:43.351 ]' 00:15:43.351 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.611 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.611 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.611 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:43.611 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.611 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.611 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.611 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.870 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:43.870 08:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: 
--dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:44.807 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.807 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:44.807 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.807 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.807 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.807 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.807 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:44.807 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:45.070 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:15:45.070 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.070 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:45.070 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:45.070 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:45.070 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.070 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.070 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.070 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.070 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.070 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.071 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.071 08:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.344 00:15:45.344 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.344 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.344 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.603 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.603 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.603 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.603 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.603 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.603 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.603 { 00:15:45.603 "cntlid": 117, 00:15:45.603 "qid": 0, 00:15:45.603 "state": "enabled", 00:15:45.603 "thread": "nvmf_tgt_poll_group_000", 00:15:45.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:45.603 "listen_address": { 00:15:45.603 "trtype": "TCP", 00:15:45.603 "adrfam": "IPv4", 00:15:45.603 "traddr": "10.0.0.3", 00:15:45.603 "trsvcid": "4420" 00:15:45.603 }, 00:15:45.603 "peer_address": { 00:15:45.603 "trtype": "TCP", 00:15:45.603 "adrfam": "IPv4", 00:15:45.603 "traddr": "10.0.0.1", 00:15:45.603 "trsvcid": "50150" 00:15:45.603 }, 00:15:45.603 "auth": { 00:15:45.603 "state": "completed", 00:15:45.603 "digest": "sha512", 00:15:45.603 "dhgroup": "ffdhe3072" 00:15:45.603 } 00:15:45.603 } 00:15:45.603 ]' 00:15:45.603 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.603 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.603 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.861 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:45.861 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.861 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.861 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.861 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.119 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:46.119 08:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid 
b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:46.685 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.685 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.685 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:46.685 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.685 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.685 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.685 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.685 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:46.685 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:46.944 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:15:46.944 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.944 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:46.944 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:46.944 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:46.944 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.944 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:15:46.944 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.944 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.944 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.944 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:46.944 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:46.944 08:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.512 00:15:47.512 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.512 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.512 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.771 { 00:15:47.771 "cntlid": 119, 00:15:47.771 "qid": 0, 00:15:47.771 "state": "enabled", 00:15:47.771 "thread": "nvmf_tgt_poll_group_000", 00:15:47.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:47.771 "listen_address": { 00:15:47.771 "trtype": "TCP", 00:15:47.771 "adrfam": "IPv4", 00:15:47.771 "traddr": "10.0.0.3", 00:15:47.771 "trsvcid": "4420" 00:15:47.771 }, 00:15:47.771 "peer_address": { 00:15:47.771 "trtype": "TCP", 00:15:47.771 "adrfam": "IPv4", 00:15:47.771 "traddr": "10.0.0.1", 00:15:47.771 "trsvcid": "58546" 00:15:47.771 }, 00:15:47.771 "auth": { 00:15:47.771 "state": "completed", 00:15:47.771 "digest": "sha512", 00:15:47.771 "dhgroup": "ffdhe3072" 00:15:47.771 } 00:15:47.771 } 00:15:47.771 ]' 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.771 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.035 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:48.036 08:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret 
DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.972 08:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.539 00:15:49.539 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.539 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.539 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.798 { 00:15:49.798 "cntlid": 121, 00:15:49.798 "qid": 0, 00:15:49.798 "state": "enabled", 00:15:49.798 "thread": "nvmf_tgt_poll_group_000", 00:15:49.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:49.798 "listen_address": { 00:15:49.798 "trtype": "TCP", 00:15:49.798 "adrfam": "IPv4", 00:15:49.798 "traddr": "10.0.0.3", 00:15:49.798 "trsvcid": "4420" 00:15:49.798 }, 00:15:49.798 "peer_address": { 00:15:49.798 "trtype": "TCP", 00:15:49.798 "adrfam": "IPv4", 00:15:49.798 "traddr": "10.0.0.1", 00:15:49.798 "trsvcid": "58568" 00:15:49.798 }, 00:15:49.798 "auth": { 00:15:49.798 "state": "completed", 00:15:49.798 "digest": "sha512", 00:15:49.798 "dhgroup": "ffdhe4096" 00:15:49.798 } 00:15:49.798 } 00:15:49.798 ]' 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.798 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.057 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:50.057 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:50.624 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.883 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:50.883 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.884 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.143 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.143 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.143 08:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.143 08:53:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.402 00:15:51.402 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.402 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.402 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.662 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.662 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.662 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.662 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.662 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.662 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.662 { 00:15:51.662 "cntlid": 123, 00:15:51.662 "qid": 0, 00:15:51.662 "state": "enabled", 00:15:51.662 "thread": "nvmf_tgt_poll_group_000", 00:15:51.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:51.662 "listen_address": { 00:15:51.662 "trtype": "TCP", 00:15:51.662 "adrfam": "IPv4", 00:15:51.662 "traddr": "10.0.0.3", 00:15:51.662 "trsvcid": "4420" 00:15:51.662 }, 00:15:51.662 "peer_address": { 00:15:51.662 "trtype": "TCP", 00:15:51.662 "adrfam": "IPv4", 00:15:51.662 "traddr": "10.0.0.1", 00:15:51.662 "trsvcid": "58586" 00:15:51.662 }, 00:15:51.662 "auth": { 00:15:51.662 "state": "completed", 00:15:51.662 "digest": "sha512", 00:15:51.662 "dhgroup": "ffdhe4096" 00:15:51.662 } 00:15:51.662 } 00:15:51.662 ]' 00:15:51.662 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.662 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:51.662 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.921 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:51.921 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.921 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.921 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.921 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.181 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret 
DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:52.181 08:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:15:52.746 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.746 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:52.746 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.746 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.746 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.746 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:52.746 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:52.746 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:53.005 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:53.005 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.005 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:53.005 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:53.005 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.005 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.005 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.005 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.005 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.005 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.005 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.005 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.005 08:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.328 00:15:53.586 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.586 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.586 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.843 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.843 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.843 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.843 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.843 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.843 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.843 { 00:15:53.843 "cntlid": 125, 00:15:53.843 "qid": 0, 00:15:53.843 "state": "enabled", 00:15:53.843 "thread": "nvmf_tgt_poll_group_000", 00:15:53.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:53.843 "listen_address": { 00:15:53.843 "trtype": "TCP", 00:15:53.843 "adrfam": "IPv4", 00:15:53.843 "traddr": "10.0.0.3", 00:15:53.843 "trsvcid": "4420" 00:15:53.843 }, 00:15:53.843 "peer_address": { 00:15:53.843 "trtype": "TCP", 00:15:53.844 "adrfam": "IPv4", 00:15:53.844 "traddr": "10.0.0.1", 00:15:53.844 "trsvcid": "58612" 00:15:53.844 }, 00:15:53.844 "auth": { 00:15:53.844 "state": "completed", 00:15:53.844 "digest": "sha512", 00:15:53.844 "dhgroup": "ffdhe4096" 00:15:53.844 } 00:15:53.844 } 00:15:53.844 ]' 00:15:53.844 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.844 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.844 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.844 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:53.844 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.844 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.844 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.844 08:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.101 08:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:54.101 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:15:54.669 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.669 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:54.669 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.669 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.669 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.669 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.669 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:54.669 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:55.247 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:55.247 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.247 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:55.247 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:55.247 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.247 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.247 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:15:55.247 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.247 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.247 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.247 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.247 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.247 08:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.539 00:15:55.539 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.539 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.539 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.799 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.799 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.799 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.799 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.799 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.799 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.799 { 00:15:55.799 "cntlid": 127, 00:15:55.799 "qid": 0, 00:15:55.799 "state": "enabled", 00:15:55.799 "thread": "nvmf_tgt_poll_group_000", 00:15:55.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:55.799 "listen_address": { 00:15:55.799 "trtype": "TCP", 00:15:55.799 "adrfam": "IPv4", 00:15:55.799 "traddr": "10.0.0.3", 00:15:55.799 "trsvcid": "4420" 00:15:55.799 }, 00:15:55.799 "peer_address": { 00:15:55.799 "trtype": "TCP", 00:15:55.799 "adrfam": "IPv4", 00:15:55.799 "traddr": "10.0.0.1", 00:15:55.799 "trsvcid": "58648" 00:15:55.799 }, 00:15:55.799 "auth": { 00:15:55.799 "state": "completed", 00:15:55.799 "digest": "sha512", 00:15:55.799 "dhgroup": "ffdhe4096" 00:15:55.799 } 00:15:55.799 } 00:15:55.799 ]' 00:15:55.799 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.799 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.799 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.799 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:55.799 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:56.058 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.058 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.058 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
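The trace above keeps repeating one cycle per DH group and key index: restrict the host-side bdev_nvme DH-HMAC-CHAP options, allow the host NQN on the subsystem with the matching key pair, attach a controller over TCP (which forces authentication), inspect the resulting qpair, then tear everything down. Below is a condensed, hedged sketch of a single iteration assembled only from commands that appear in this run; the rpc.py path, /var/tmp/host.sock, the NQNs, 10.0.0.3 and the key names key2/ckey2 are taken from this log, and the DHHC-1 secrets behind those key names are registered earlier in the test, outside this excerpt. The target-side rpc.py calls are assumed to go to the target app's default socket, as the rpc_cmd wrapper does here.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration from this trace (not the test script itself).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403
SUBNQN=nqn.2024-03.io.spdk:cnode0

# 1. Pin the host app to one digest/dhgroup combination (here sha512 + ffdhe4096).
$RPC -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# 2. Allow the host NQN on the subsystem with the key pair under test
#    (key2/ckey2 were loaded into the target earlier in the test).
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3. Attach a controller from the host app, which performs DH-HMAC-CHAP.
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4. ...qpair verification happens here (see the jq checks in the trace)...

# 5. Tear down before the next digest/dhgroup/key combination.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"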
00:15:56.317 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:56.317 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:15:56.884 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.884 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:56.884 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.884 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.142 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.142 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.142 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:57.142 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:57.142 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:57.400 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:57.400 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.400 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:57.400 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:57.400 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:57.400 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.400 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.400 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.400 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.400 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.400 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
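Besides the SPDK host app, each pass also authenticates with the kernel initiator through nvme-cli, as the nvme_connect/nvme disconnect lines in the trace show. A minimal sketch of that step follows; the flags and addresses are copied from this log, while the DHHC-1 secret values (printed in full above) are replaced by placeholder shell variables.

# Kernel-initiator side of the same check; DHCHAP_KEY/DHCHAP_CTRL_KEY stand in
# for the DHHC-1 secrets printed in the trace.
HOSTID=b09210cb-7022-43fe-9129-03e098f7a403
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:${HOSTID}" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"

# Once the controller shows up, it is dropped again before the host is removed.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0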
00:15:57.400 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.400 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.658 00:15:57.658 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.658 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.658 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.225 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.225 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.225 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.225 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.225 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.225 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.225 { 00:15:58.225 "cntlid": 129, 00:15:58.225 "qid": 0, 00:15:58.225 "state": "enabled", 00:15:58.225 "thread": "nvmf_tgt_poll_group_000", 00:15:58.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:15:58.225 "listen_address": { 00:15:58.225 "trtype": "TCP", 00:15:58.225 "adrfam": "IPv4", 00:15:58.225 "traddr": "10.0.0.3", 00:15:58.225 "trsvcid": "4420" 00:15:58.225 }, 00:15:58.225 "peer_address": { 00:15:58.225 "trtype": "TCP", 00:15:58.225 "adrfam": "IPv4", 00:15:58.225 "traddr": "10.0.0.1", 00:15:58.225 "trsvcid": "39192" 00:15:58.225 }, 00:15:58.225 "auth": { 00:15:58.225 "state": "completed", 00:15:58.225 "digest": "sha512", 00:15:58.225 "dhgroup": "ffdhe6144" 00:15:58.225 } 00:15:58.225 } 00:15:58.225 ]' 00:15:58.225 08:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.225 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:58.225 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.225 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:58.225 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.225 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.225 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.225 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.483 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:58.483 08:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:15:59.419 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.419 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:15:59.419 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.419 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.419 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.419 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.419 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:59.419 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:59.678 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:59.678 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.678 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:59.678 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:59.678 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:59.678 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.678 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.678 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.678 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.678 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.678 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.678 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.678 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:59.937 00:16:00.196 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.196 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.196 08:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.196 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.196 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.196 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.196 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.456 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.456 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.456 { 00:16:00.456 "cntlid": 131, 00:16:00.456 "qid": 0, 00:16:00.456 "state": "enabled", 00:16:00.456 "thread": "nvmf_tgt_poll_group_000", 00:16:00.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:00.456 "listen_address": { 00:16:00.456 "trtype": "TCP", 00:16:00.456 "adrfam": "IPv4", 00:16:00.456 "traddr": "10.0.0.3", 00:16:00.456 "trsvcid": "4420" 00:16:00.456 }, 00:16:00.456 "peer_address": { 00:16:00.456 "trtype": "TCP", 00:16:00.456 "adrfam": "IPv4", 00:16:00.456 "traddr": "10.0.0.1", 00:16:00.456 "trsvcid": "39214" 00:16:00.456 }, 00:16:00.456 "auth": { 00:16:00.456 "state": "completed", 00:16:00.456 "digest": "sha512", 00:16:00.456 "dhgroup": "ffdhe6144" 00:16:00.456 } 00:16:00.456 } 00:16:00.456 ]' 00:16:00.456 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.456 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.456 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.456 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:00.456 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.456 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
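Each iteration is validated by reading the subsystem's qpairs back from the target and comparing the negotiated auth fields against the expected values. A sketch of that check, assuming the same rpc.py path and the jq filters used throughout this trace (the expected digest/dhgroup here match the ffdhe6144 pass shown above):

# Confirm the single established qpair authenticated with the expected parameters.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]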
00:16:00.456 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.456 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.715 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:16:00.715 08:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:16:01.652 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.653 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:01.653 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.653 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.653 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.653 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.653 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:01.653 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:01.912 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:01.912 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.912 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:01.912 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:01.912 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:01.912 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.912 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.912 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.912 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:01.912 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.912 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.912 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:01.912 08:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.172 00:16:02.431 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.431 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.431 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.690 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.690 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.690 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.690 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.690 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.690 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.690 { 00:16:02.690 "cntlid": 133, 00:16:02.690 "qid": 0, 00:16:02.690 "state": "enabled", 00:16:02.690 "thread": "nvmf_tgt_poll_group_000", 00:16:02.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:02.690 "listen_address": { 00:16:02.690 "trtype": "TCP", 00:16:02.690 "adrfam": "IPv4", 00:16:02.690 "traddr": "10.0.0.3", 00:16:02.690 "trsvcid": "4420" 00:16:02.690 }, 00:16:02.690 "peer_address": { 00:16:02.690 "trtype": "TCP", 00:16:02.690 "adrfam": "IPv4", 00:16:02.690 "traddr": "10.0.0.1", 00:16:02.690 "trsvcid": "39236" 00:16:02.690 }, 00:16:02.690 "auth": { 00:16:02.690 "state": "completed", 00:16:02.690 "digest": "sha512", 00:16:02.690 "dhgroup": "ffdhe6144" 00:16:02.690 } 00:16:02.690 } 00:16:02.690 ]' 00:16:02.690 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.690 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.690 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.690 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:02.690 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.690 08:53:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.690 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.690 08:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.258 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:16:03.258 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:16:03.827 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.827 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:03.827 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.827 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.827 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.827 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.827 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:03.827 08:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:04.396 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:04.396 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.396 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:04.396 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:04.396 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:04.396 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.396 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:16:04.396 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:04.396 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.396 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.396 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:04.396 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.396 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:04.655 00:16:04.914 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.914 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.914 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.224 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.224 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.224 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.224 08:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.224 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.224 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.224 { 00:16:05.224 "cntlid": 135, 00:16:05.224 "qid": 0, 00:16:05.224 "state": "enabled", 00:16:05.224 "thread": "nvmf_tgt_poll_group_000", 00:16:05.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:05.224 "listen_address": { 00:16:05.224 "trtype": "TCP", 00:16:05.224 "adrfam": "IPv4", 00:16:05.224 "traddr": "10.0.0.3", 00:16:05.224 "trsvcid": "4420" 00:16:05.224 }, 00:16:05.224 "peer_address": { 00:16:05.224 "trtype": "TCP", 00:16:05.224 "adrfam": "IPv4", 00:16:05.224 "traddr": "10.0.0.1", 00:16:05.224 "trsvcid": "39256" 00:16:05.224 }, 00:16:05.224 "auth": { 00:16:05.225 "state": "completed", 00:16:05.225 "digest": "sha512", 00:16:05.225 "dhgroup": "ffdhe6144" 00:16:05.225 } 00:16:05.225 } 00:16:05.225 ]' 00:16:05.225 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.225 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.225 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.225 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:05.225 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.225 
08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.225 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.225 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.821 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:16:05.821 08:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:16:06.389 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.389 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:06.389 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.389 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.389 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.389 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:06.389 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.389 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:06.389 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:06.647 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:06.647 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.647 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:06.647 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:06.647 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:06.648 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.648 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.648 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.648 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.906 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.906 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.906 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:06.906 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.474 00:16:07.474 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.474 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.474 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.734 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.734 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.734 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.734 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.734 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.734 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.734 { 00:16:07.734 "cntlid": 137, 00:16:07.734 "qid": 0, 00:16:07.734 "state": "enabled", 00:16:07.734 "thread": "nvmf_tgt_poll_group_000", 00:16:07.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:07.734 "listen_address": { 00:16:07.734 "trtype": "TCP", 00:16:07.734 "adrfam": "IPv4", 00:16:07.734 "traddr": "10.0.0.3", 00:16:07.734 "trsvcid": "4420" 00:16:07.734 }, 00:16:07.734 "peer_address": { 00:16:07.734 "trtype": "TCP", 00:16:07.734 "adrfam": "IPv4", 00:16:07.734 "traddr": "10.0.0.1", 00:16:07.734 "trsvcid": "47878" 00:16:07.734 }, 00:16:07.734 "auth": { 00:16:07.734 "state": "completed", 00:16:07.734 "digest": "sha512", 00:16:07.734 "dhgroup": "ffdhe8192" 00:16:07.734 } 00:16:07.734 } 00:16:07.734 ]' 00:16:07.734 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.999 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.999 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.999 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:07.999 08:53:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.999 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.999 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.000 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.258 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:16:08.258 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:16:09.194 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.194 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:09.194 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.194 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.194 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.194 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.194 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:09.194 08:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:09.453 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:09.454 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.454 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.454 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:09.454 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:09.454 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.454 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.454 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.454 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.454 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.454 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.454 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.454 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.020 00:16:10.020 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.020 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.020 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.277 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.277 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.277 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.277 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.277 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.277 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.277 { 00:16:10.277 "cntlid": 139, 00:16:10.277 "qid": 0, 00:16:10.277 "state": "enabled", 00:16:10.277 "thread": "nvmf_tgt_poll_group_000", 00:16:10.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:10.277 "listen_address": { 00:16:10.277 "trtype": "TCP", 00:16:10.277 "adrfam": "IPv4", 00:16:10.277 "traddr": "10.0.0.3", 00:16:10.277 "trsvcid": "4420" 00:16:10.277 }, 00:16:10.277 "peer_address": { 00:16:10.277 "trtype": "TCP", 00:16:10.277 "adrfam": "IPv4", 00:16:10.277 "traddr": "10.0.0.1", 00:16:10.277 "trsvcid": "47912" 00:16:10.277 }, 00:16:10.277 "auth": { 00:16:10.277 "state": "completed", 00:16:10.277 "digest": "sha512", 00:16:10.277 "dhgroup": "ffdhe8192" 00:16:10.277 } 00:16:10.277 } 00:16:10.277 ]' 00:16:10.277 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.277 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.277 08:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.277 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:10.277 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.535 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.535 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.535 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.793 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:16:10.793 08:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: --dhchap-ctrl-secret DHHC-1:02:MTlkYTMzZThhZWQ4Mzc3ZGMxNjQ5MTBlZjJiZGNlM2FjNTNlNDk0YmQ1YzQ2MjM159iABA==: 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.729 08:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.664 00:16:12.664 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.664 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.664 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.924 { 00:16:12.924 "cntlid": 141, 00:16:12.924 "qid": 0, 00:16:12.924 "state": "enabled", 00:16:12.924 "thread": "nvmf_tgt_poll_group_000", 00:16:12.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:12.924 "listen_address": { 00:16:12.924 "trtype": "TCP", 00:16:12.924 "adrfam": "IPv4", 00:16:12.924 "traddr": "10.0.0.3", 00:16:12.924 "trsvcid": "4420" 00:16:12.924 }, 00:16:12.924 "peer_address": { 00:16:12.924 "trtype": "TCP", 00:16:12.924 "adrfam": "IPv4", 00:16:12.924 "traddr": "10.0.0.1", 00:16:12.924 "trsvcid": "47940" 00:16:12.924 }, 00:16:12.924 "auth": { 00:16:12.924 "state": "completed", 00:16:12.924 "digest": "sha512", 00:16:12.924 "dhgroup": "ffdhe8192" 00:16:12.924 } 00:16:12.924 } 00:16:12.924 ]' 00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.924 08:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.182 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:16:13.182 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:01:Y2MwZjk4MTgyNTExZTRiNzExYzI5YjE5NzJkNzc2OWNXOHsr: 00:16:14.117 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.117 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:14.117 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.117 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.117 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.117 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.117 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:14.117 08:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:14.376 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:14.376 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.376 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:14.376 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:14.376 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:16:14.376 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.376 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:16:14.376 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.376 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.376 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.376 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:14.376 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.376 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.943 00:16:14.943 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.943 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.943 08:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.202 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.202 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.202 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.202 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.461 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.461 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.461 { 00:16:15.461 "cntlid": 143, 00:16:15.461 "qid": 0, 00:16:15.461 "state": "enabled", 00:16:15.461 "thread": "nvmf_tgt_poll_group_000", 00:16:15.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:15.461 "listen_address": { 00:16:15.461 "trtype": "TCP", 00:16:15.461 "adrfam": "IPv4", 00:16:15.461 "traddr": "10.0.0.3", 00:16:15.461 "trsvcid": "4420" 00:16:15.461 }, 00:16:15.461 "peer_address": { 00:16:15.461 "trtype": "TCP", 00:16:15.461 "adrfam": "IPv4", 00:16:15.461 "traddr": "10.0.0.1", 00:16:15.461 "trsvcid": "47958" 00:16:15.461 }, 00:16:15.461 "auth": { 00:16:15.461 "state": "completed", 00:16:15.461 "digest": "sha512", 00:16:15.461 "dhgroup": "ffdhe8192" 00:16:15.461 } 00:16:15.461 } 00:16:15.461 ]' 00:16:15.461 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:15.461 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.461 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.461 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:15.461 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.461 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.461 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.461 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.720 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:16:15.720 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:16:16.665 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.665 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:16.665 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.665 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.665 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.665 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:16.665 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:16.665 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:16.665 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:16.665 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:16.666 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:16.925 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:16.925 08:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.925 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:16.925 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:16.925 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.925 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.925 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.925 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.925 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.925 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.925 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.925 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.925 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.859 00:16:17.859 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.859 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.859 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.118 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.118 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.118 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.118 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.118 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.118 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.118 { 00:16:18.118 "cntlid": 145, 00:16:18.118 "qid": 0, 00:16:18.118 "state": "enabled", 00:16:18.118 "thread": "nvmf_tgt_poll_group_000", 00:16:18.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:18.118 "listen_address": { 00:16:18.118 "trtype": "TCP", 00:16:18.118 "adrfam": "IPv4", 00:16:18.118 "traddr": "10.0.0.3", 
00:16:18.118 "trsvcid": "4420" 00:16:18.118 }, 00:16:18.118 "peer_address": { 00:16:18.118 "trtype": "TCP", 00:16:18.118 "adrfam": "IPv4", 00:16:18.118 "traddr": "10.0.0.1", 00:16:18.118 "trsvcid": "58188" 00:16:18.118 }, 00:16:18.118 "auth": { 00:16:18.118 "state": "completed", 00:16:18.118 "digest": "sha512", 00:16:18.118 "dhgroup": "ffdhe8192" 00:16:18.118 } 00:16:18.118 } 00:16:18.118 ]' 00:16:18.118 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.118 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.118 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.118 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:18.118 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.118 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.118 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.118 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.685 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:16:18.685 08:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:00:NTQwMTFiZWI3ZmE3NmMwY2Q0YzEyY2QxZGJlYzVmYWM3MWExYzAyMmVlODIzNDk33ns4zw==: --dhchap-ctrl-secret DHHC-1:03:MjAxOWM5ZWIwMjdkZmU5OGU2NTBlMmUwN2M0MzE5MTMzYmEyNjUzMDg3NzI5ZTRjMjU5NmU5MmQyZTU1ZGY3OTrFVgw=: 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.253 
08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:19.253 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:19.822 request: 00:16:19.822 { 00:16:19.822 "name": "nvme0", 00:16:19.822 "trtype": "tcp", 00:16:19.822 "traddr": "10.0.0.3", 00:16:19.822 "adrfam": "ipv4", 00:16:19.822 "trsvcid": "4420", 00:16:19.822 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:19.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:19.822 "prchk_reftag": false, 00:16:19.822 "prchk_guard": false, 00:16:19.822 "hdgst": false, 00:16:19.822 "ddgst": false, 00:16:19.822 "dhchap_key": "key2", 00:16:19.822 "allow_unrecognized_csi": false, 00:16:19.822 "method": "bdev_nvme_attach_controller", 00:16:19.822 "req_id": 1 00:16:19.822 } 00:16:19.822 Got JSON-RPC error response 00:16:19.822 response: 00:16:19.822 { 00:16:19.822 "code": -5, 00:16:19.822 "message": "Input/output error" 00:16:19.822 } 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
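The attach attempt above uses key2 even though the host was re-registered for key1 only, so the JSON-RPC layer returns code -5 (Input/output error) and the harness counts that failure as a pass through its NOT wrapper. A minimal standalone sketch of the same expected-failure check, using only the rpc.py flags visible in the xtrace (the if/exit framing replaces the harness' NOT helper and is illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Attaching with a DH-CHAP key the subsystem no longer accepts must fail; success would be a test bug.
if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
  echo "unexpected success: attach with the wrong DH-CHAP key" >&2
  exit 1
fi
# Expected outcome is the JSON-RPC error printed above: {"code": -5, "message": "Input/output error"}.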
00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:19.822 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:20.391 request: 00:16:20.391 { 00:16:20.391 "name": "nvme0", 00:16:20.391 "trtype": "tcp", 00:16:20.391 "traddr": "10.0.0.3", 00:16:20.391 "adrfam": "ipv4", 00:16:20.391 "trsvcid": "4420", 00:16:20.391 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:20.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:20.391 "prchk_reftag": false, 00:16:20.391 "prchk_guard": false, 00:16:20.391 "hdgst": false, 00:16:20.391 "ddgst": false, 00:16:20.391 "dhchap_key": "key1", 00:16:20.391 "dhchap_ctrlr_key": "ckey2", 00:16:20.391 "allow_unrecognized_csi": false, 00:16:20.391 "method": "bdev_nvme_attach_controller", 00:16:20.391 "req_id": 1 00:16:20.391 } 00:16:20.391 Got JSON-RPC error response 00:16:20.391 response: 00:16:20.391 { 00:16:20.391 "code": -5, 00:16:20.391 "message": "Input/output error" 00:16:20.391 } 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:20.391 08:53:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.391 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.330 request: 00:16:21.330 { 00:16:21.330 "name": "nvme0", 00:16:21.330 "trtype": "tcp", 00:16:21.330 "traddr": "10.0.0.3", 00:16:21.330 "adrfam": "ipv4", 00:16:21.330 "trsvcid": "4420", 00:16:21.330 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:16:21.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:21.330 "prchk_reftag": false, 00:16:21.330 "prchk_guard": false, 00:16:21.330 "hdgst": false, 00:16:21.330 "ddgst": false, 00:16:21.330 "dhchap_key": "key1", 00:16:21.330 "dhchap_ctrlr_key": "ckey1", 00:16:21.330 "allow_unrecognized_csi": false, 00:16:21.330 "method": "bdev_nvme_attach_controller", 00:16:21.330 "req_id": 1 00:16:21.330 } 00:16:21.330 Got JSON-RPC error response 00:16:21.330 response: 00:16:21.330 { 00:16:21.330 "code": -5, 00:16:21.330 "message": "Input/output error" 00:16:21.330 } 00:16:21.330 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 70036 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 70036 ']' 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 70036 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70036 00:16:21.330 killing process with pid 70036 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70036' 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 70036 00:16:21.330 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 70036 00:16:22.267 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:22.267 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:22.267 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:22.267 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:22.267 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=73110 00:16:22.267 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:22.267 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 73110 00:16:22.267 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 73110 ']' 00:16:22.267 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.267 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.268 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.268 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.268 08:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.205 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.205 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:23.206 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:23.206 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:23.206 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.464 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.464 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:23.464 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 73110 00:16:23.464 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 73110 ']' 00:16:23.464 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.464 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:23.464 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:23.464 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:23.464 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.723 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.723 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:23.723 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:23.723 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.723 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.982 null0 00:16:23.982 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.982 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:23.982 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.AQW 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.kxa ]] 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kxa 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Yap 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.hHr ]] 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hHr 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:23.983 08:54:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.CvW 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.IPW ]] 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IPW 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6dB 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:16:23.983 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.938 nvme0n1 00:16:24.938 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.938 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.938 08:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.523 { 00:16:25.523 "cntlid": 1, 00:16:25.523 "qid": 0, 00:16:25.523 "state": "enabled", 00:16:25.523 "thread": "nvmf_tgt_poll_group_000", 00:16:25.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:25.523 "listen_address": { 00:16:25.523 "trtype": "TCP", 00:16:25.523 "adrfam": "IPv4", 00:16:25.523 "traddr": "10.0.0.3", 00:16:25.523 "trsvcid": "4420" 00:16:25.523 }, 00:16:25.523 "peer_address": { 00:16:25.523 "trtype": "TCP", 00:16:25.523 "adrfam": "IPv4", 00:16:25.523 "traddr": "10.0.0.1", 00:16:25.523 "trsvcid": "58236" 00:16:25.523 }, 00:16:25.523 "auth": { 00:16:25.523 "state": "completed", 00:16:25.523 "digest": "sha512", 00:16:25.523 "dhgroup": "ffdhe8192" 00:16:25.523 } 00:16:25.523 } 00:16:25.523 ]' 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.523 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.782 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:16:25.782 08:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:16:26.720 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.720 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:26.720 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.720 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.720 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.720 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key3 00:16:26.720 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.720 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.720 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.720 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:26.720 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:26.979 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:26.979 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:26.979 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:26.980 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:26.980 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.980 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:26.980 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:26.980 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:26.980 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:26.980 08:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.239 request: 00:16:27.239 { 00:16:27.239 "name": "nvme0", 00:16:27.239 "trtype": "tcp", 00:16:27.239 "traddr": "10.0.0.3", 00:16:27.239 "adrfam": "ipv4", 00:16:27.239 "trsvcid": "4420", 00:16:27.239 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:27.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:27.239 "prchk_reftag": false, 00:16:27.239 "prchk_guard": false, 00:16:27.239 "hdgst": false, 00:16:27.239 "ddgst": false, 00:16:27.239 "dhchap_key": "key3", 00:16:27.239 "allow_unrecognized_csi": false, 00:16:27.239 "method": "bdev_nvme_attach_controller", 00:16:27.239 "req_id": 1 00:16:27.239 } 00:16:27.239 Got JSON-RPC error response 00:16:27.239 response: 00:16:27.239 { 00:16:27.239 "code": -5, 00:16:27.239 "message": "Input/output error" 00:16:27.239 } 00:16:27.239 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:27.239 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:27.239 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:27.239 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:27.240 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:27.240 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:27.240 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:27.240 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:27.499 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:27.499 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:27.499 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:27.499 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:27.499 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.499 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:27.499 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:27.499 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.499 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.499 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.758 request: 00:16:27.758 { 00:16:27.758 "name": "nvme0", 00:16:27.758 "trtype": "tcp", 00:16:27.758 "traddr": "10.0.0.3", 00:16:27.758 "adrfam": "ipv4", 00:16:27.758 "trsvcid": "4420", 00:16:27.758 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:27.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:27.758 "prchk_reftag": false, 00:16:27.758 "prchk_guard": false, 00:16:27.758 "hdgst": false, 00:16:27.758 "ddgst": false, 00:16:27.758 "dhchap_key": "key3", 00:16:27.758 "allow_unrecognized_csi": false, 00:16:27.758 "method": "bdev_nvme_attach_controller", 00:16:27.758 "req_id": 1 00:16:27.758 } 00:16:27.758 Got JSON-RPC error response 00:16:27.758 response: 00:16:27.758 { 00:16:27.758 "code": -5, 00:16:27.758 "message": "Input/output error" 00:16:27.758 } 00:16:27.758 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:27.758 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:27.758 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:27.758 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:27.758 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:27.758 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:27.758 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:27.758 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:27.758 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:27.758 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:28.021 08:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:28.588 request: 00:16:28.588 { 00:16:28.588 "name": "nvme0", 00:16:28.588 "trtype": "tcp", 00:16:28.588 "traddr": "10.0.0.3", 00:16:28.588 "adrfam": "ipv4", 00:16:28.588 "trsvcid": "4420", 00:16:28.588 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:28.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:28.588 "prchk_reftag": false, 00:16:28.588 "prchk_guard": false, 00:16:28.588 "hdgst": false, 00:16:28.588 "ddgst": false, 00:16:28.588 "dhchap_key": "key0", 00:16:28.588 "dhchap_ctrlr_key": "key1", 00:16:28.588 "allow_unrecognized_csi": false, 00:16:28.588 "method": "bdev_nvme_attach_controller", 00:16:28.588 "req_id": 1 00:16:28.588 } 00:16:28.588 Got JSON-RPC error response 00:16:28.588 response: 00:16:28.588 { 00:16:28.588 "code": -5, 00:16:28.588 "message": "Input/output error" 00:16:28.588 } 00:16:28.588 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:28.588 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:28.588 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:28.588 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:16:28.589 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:28.589 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:28.589 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:28.847 nvme0n1 00:16:28.847 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:28.847 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.847 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:29.105 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.105 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.105 08:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.364 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 00:16:29.364 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.364 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.364 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.364 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:29.364 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:29.364 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:30.300 nvme0n1 00:16:30.300 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:30.300 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:30.301 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.868 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.868 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:30.868 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.868 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.868 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.868 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:30.868 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.868 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:31.127 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.127 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:16:31.127 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid b09210cb-7022-43fe-9129-03e098f7a403 -l 0 --dhchap-secret DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: --dhchap-ctrl-secret DHHC-1:03:Y2NiOTdhZDQ4YTIyMjJjZGM0NzZiNTI2ODdmM2I4MDk4NTQ2YTM2NjdkN2JmN2Q2OGUyYjZlNTk1OTZhOWJlOelGLjg=: 00:16:32.062 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:32.062 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:32.062 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:32.062 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:32.062 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:32.062 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:32.062 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:32.062 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.062 08:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.062 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:32.062 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:32.062 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:32.062 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:16:32.062 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:32.062 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:16:32.062 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:32.062 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:32.062 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:32.062 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:33.000 request: 00:16:33.000 { 00:16:33.000 "name": "nvme0", 00:16:33.000 "trtype": "tcp", 00:16:33.000 "traddr": "10.0.0.3", 00:16:33.000 "adrfam": "ipv4", 00:16:33.000 "trsvcid": "4420", 00:16:33.000 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:33.000 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403", 00:16:33.000 "prchk_reftag": false, 00:16:33.000 "prchk_guard": false, 00:16:33.000 "hdgst": false, 00:16:33.000 "ddgst": false, 00:16:33.000 "dhchap_key": "key1", 00:16:33.000 "allow_unrecognized_csi": false, 00:16:33.000 "method": "bdev_nvme_attach_controller", 00:16:33.000 "req_id": 1 00:16:33.000 } 00:16:33.000 Got JSON-RPC error response 00:16:33.000 response: 00:16:33.000 { 00:16:33.000 "code": -5, 00:16:33.000 "message": "Input/output error" 00:16:33.000 } 00:16:33.000 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:33.000 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:33.000 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:33.000 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:33.000 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:33.000 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:33.000 08:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:33.937 nvme0n1 00:16:33.937 
08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:33.937 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:33.937 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.937 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.937 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.937 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.537 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:34.537 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.537 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.537 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.537 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:34.537 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:34.537 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:34.795 nvme0n1 00:16:34.795 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:34.795 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.795 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:35.053 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.053 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.053 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.312 08:54:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: '' 2s 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: ]] 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NzUwNjdiMmVkOTMzNmExZDEwMmMzZDg1ZjdiNjNmZDWVtwtS: 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:35.312 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: 2s 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:37.845 08:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: ]] 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmU1YTJlNjJmNjQzM2Y3NzJjZGQ0MzU3NDJkZjdiMDI4NTc2ZGZhOTllZjk0ODUx3IsjXg==: 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:37.845 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:39.746 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:39.747 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:40.313 nvme0n1 00:16:40.572 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:40.572 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.572 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.572 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.572 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:40.572 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:41.140 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:41.140 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:41.140 08:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.398 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.398 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:41.398 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.398 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.398 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.398 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:41.398 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:41.657 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:41.657 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.657 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:41.916 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.916 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:41.916 08:54:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.916 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.916 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.916 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:41.916 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:41.916 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:41.916 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:41.916 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.916 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:41.916 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:41.916 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:41.916 08:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:42.483 request: 00:16:42.483 { 00:16:42.483 "name": "nvme0", 00:16:42.483 "dhchap_key": "key1", 00:16:42.483 "dhchap_ctrlr_key": "key3", 00:16:42.483 "method": "bdev_nvme_set_keys", 00:16:42.483 "req_id": 1 00:16:42.483 } 00:16:42.483 Got JSON-RPC error response 00:16:42.483 response: 00:16:42.483 { 00:16:42.483 "code": -13, 00:16:42.483 "message": "Permission denied" 00:16:42.483 } 00:16:42.483 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:42.483 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:42.483 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:42.483 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:42.483 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:42.483 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.483 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:42.743 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:16:42.743 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:44.120 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:44.120 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:44.120 08:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.120 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:44.120 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:44.120 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.120 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.120 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.120 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:44.120 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:44.120 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:45.058 nvme0n1 00:16:45.058 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:45.058 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.058 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.058 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.058 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:45.058 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:45.058 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:45.058 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:45.058 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.058 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:45.058 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:45.058 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:45.058 08:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:45.625 request: 00:16:45.625 { 00:16:45.625 "name": "nvme0", 00:16:45.625 "dhchap_key": "key2", 00:16:45.625 "dhchap_ctrlr_key": "key0", 00:16:45.625 "method": "bdev_nvme_set_keys", 00:16:45.625 "req_id": 1 00:16:45.625 } 00:16:45.625 Got JSON-RPC error response 00:16:45.625 response: 00:16:45.625 { 00:16:45.625 "code": -13, 00:16:45.625 "message": "Permission denied" 00:16:45.625 } 00:16:45.625 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:45.625 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:45.625 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:45.625 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:45.625 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:45.625 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:45.625 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.884 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:45.884 08:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:46.861 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:46.861 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.861 08:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:47.429 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:47.429 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:47.429 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:47.429 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 70068 00:16:47.429 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 70068 ']' 00:16:47.429 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 70068 00:16:47.429 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:47.429 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:47.429 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70068 00:16:47.429 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:47.429 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:47.429 killing process with pid 70068 00:16:47.429 08:54:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70068' 00:16:47.429 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 70068 00:16:47.429 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 70068 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:49.339 rmmod nvme_tcp 00:16:49.339 rmmod nvme_fabrics 00:16:49.339 rmmod nvme_keyring 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 73110 ']' 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 73110 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 73110 ']' 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 73110 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73110 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:49.339 killing process with pid 73110 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73110' 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 73110 00:16:49.339 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 73110 00:16:50.717 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:50.717 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:50.717 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:50.717 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 
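Before the nvmftestfini teardown above finishes, the auth trace exercises the DH-HMAC-CHAP re-key path: nvmf_subsystem_set_keys rotates the keys the target accepts for this host, bdev_nvme_attach_controller reconnects the host-side bdev with the matching pair, and a deliberately mismatched bdev_nvme_set_keys call is expected to fail with JSON-RPC error -13 (Permission denied). A minimal sketch of that flow using the rpc.py invocations visible in the trace follows; it assumes key0..key3 were registered with the target and host keyrings earlier in the script, and that the target-side calls (rpc_cmd in the trace) go to the target's default RPC socket while the host-side calls use /var/tmp/host.sock.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403

# Rotate the keys the target will accept for this host (target RPC socket).
$RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key key1

# Reconnect the host-side bdev with the matching key pair (host RPC socket).
$RPC -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key key1 \
    --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1

# Rotate the target to key2/key3, then try to re-key the host with a stale
# controller key; the trace expects this call to be rejected with -13.
$RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3
$RPC -s "$HOST_SOCK" bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 \
    || echo "re-key with mismatched controller key rejected, as the test expects"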
00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.AQW /tmp/spdk.key-sha256.Yap /tmp/spdk.key-sha384.CvW /tmp/spdk.key-sha512.6dB /tmp/spdk.key-sha512.kxa /tmp/spdk.key-sha384.hHr /tmp/spdk.key-sha256.IPW '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:16:50.718 ************************************ 00:16:50.718 END TEST nvmf_auth_target 00:16:50.718 ************************************ 00:16:50.718 00:16:50.718 real 3m16.637s 00:16:50.718 user 7m49.171s 00:16:50.718 sys 0m27.404s 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:50.718 08:54:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:50.718 ************************************ 00:16:50.718 START TEST nvmf_bdevio_no_huge 00:16:50.718 ************************************ 00:16:50.718 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:50.977 * Looking for test storage... 00:16:50.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:16:50.977 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:50.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.978 --rc genhtml_branch_coverage=1 00:16:50.978 --rc genhtml_function_coverage=1 00:16:50.978 --rc genhtml_legend=1 00:16:50.978 --rc geninfo_all_blocks=1 00:16:50.978 --rc geninfo_unexecuted_blocks=1 00:16:50.978 00:16:50.978 ' 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:50.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.978 --rc genhtml_branch_coverage=1 00:16:50.978 --rc genhtml_function_coverage=1 00:16:50.978 --rc genhtml_legend=1 00:16:50.978 --rc geninfo_all_blocks=1 00:16:50.978 --rc geninfo_unexecuted_blocks=1 00:16:50.978 00:16:50.978 ' 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:50.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.978 --rc genhtml_branch_coverage=1 00:16:50.978 --rc genhtml_function_coverage=1 00:16:50.978 --rc genhtml_legend=1 00:16:50.978 --rc geninfo_all_blocks=1 00:16:50.978 --rc geninfo_unexecuted_blocks=1 00:16:50.978 00:16:50.978 ' 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:50.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.978 --rc genhtml_branch_coverage=1 00:16:50.978 --rc genhtml_function_coverage=1 00:16:50.978 --rc genhtml_legend=1 00:16:50.978 --rc geninfo_all_blocks=1 00:16:50.978 --rc geninfo_unexecuted_blocks=1 00:16:50.978 00:16:50.978 ' 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:50.978 
08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:50.978 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:50.978 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:50.979 
08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:50.979 Cannot find device "nvmf_init_br" 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:50.979 Cannot find device "nvmf_init_br2" 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:50.979 Cannot find device "nvmf_tgt_br" 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:16:50.979 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:51.237 Cannot find device "nvmf_tgt_br2" 00:16:51.237 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:16:51.237 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:51.237 Cannot find device "nvmf_init_br" 00:16:51.237 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:16:51.237 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:51.237 Cannot find device "nvmf_init_br2" 00:16:51.237 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:16:51.237 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:51.237 Cannot find device "nvmf_tgt_br" 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:51.237 Cannot find device "nvmf_tgt_br2" 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:51.237 Cannot find device "nvmf_br" 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:51.237 Cannot find device "nvmf_init_if" 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:51.237 Cannot find device "nvmf_init_if2" 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:16:51.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:51.237 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:51.237 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:51.238 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:51.238 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:51.238 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:51.238 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:51.238 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:51.238 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:51.238 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:51.238 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:51.238 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:51.238 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:51.238 08:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:51.238 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:51.238 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:51.496 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:51.496 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:16:51.496 00:16:51.496 --- 10.0.0.3 ping statistics --- 00:16:51.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.496 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:51.496 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:51.496 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:16:51.496 00:16:51.496 --- 10.0.0.4 ping statistics --- 00:16:51.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.496 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:51.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:51.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:16:51.496 00:16:51.496 --- 10.0.0.1 ping statistics --- 00:16:51.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.496 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:51.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:51.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:16:51.496 00:16:51.496 --- 10.0.0.2 ping statistics --- 00:16:51.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.496 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=73789 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 73789 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 73789 ']' 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.496 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:51.496 [2024-09-28 08:54:29.462169] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
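At this point nvmf_veth_init has rebuilt the virtual test network from scratch and the target has just been launched inside the namespace with --no-huge -s 1024 -m 0x78: a nvmf_tgt_ns_spdk namespace, veth pairs for the initiator (10.0.0.1, 10.0.0.2) and for the target (10.0.0.3, 10.0.0.4 inside the namespace), all tied together over the nvmf_br bridge, with iptables ACCEPT rules for port 4420 and one ping per address as a reachability check. A condensed sketch of the same fixture, limited to one initiator/target pair, is shown below; interface names, addresses and the nvmf_tgt command line mirror the trace.

# Build one veth pair for the initiator and one for the target namespace
# (condensed from nvmf_veth_init in the trace above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers together and allow NVMe/TCP traffic in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Reachability check, then start the target inside the namespace without hugepages.
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &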
00:16:51.496 [2024-09-28 08:54:29.462341] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:51.754 [2024-09-28 08:54:29.665561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.012 [2024-09-28 08:54:29.988176] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.012 [2024-09-28 08:54:29.988254] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.012 [2024-09-28 08:54:29.988274] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.012 [2024-09-28 08:54:29.988291] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.012 [2024-09-28 08:54:29.988307] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.012 [2024-09-28 08:54:29.988507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:16:52.012 [2024-09-28 08:54:29.988675] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:16:52.012 [2024-09-28 08:54:29.989049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:16:52.012 [2024-09-28 08:54:29.989178] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.270 [2024-09-28 08:54:30.180604] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:52.529 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.529 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:16:52.529 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:52.529 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:52.529 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:52.529 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.529 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:52.529 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.529 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:52.529 [2024-09-28 08:54:30.514432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:52.787 Malloc0 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.787 08:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:52.787 [2024-09-28 08:54:30.610384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:16:52.787 { 00:16:52.787 "params": { 00:16:52.787 "name": "Nvme$subsystem", 00:16:52.787 "trtype": "$TEST_TRANSPORT", 00:16:52.787 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.787 "adrfam": "ipv4", 00:16:52.787 "trsvcid": "$NVMF_PORT", 00:16:52.787 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.787 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.787 "hdgst": ${hdgst:-false}, 00:16:52.787 "ddgst": ${ddgst:-false} 00:16:52.787 }, 00:16:52.787 "method": "bdev_nvme_attach_controller" 00:16:52.787 } 00:16:52.787 EOF 00:16:52.787 )") 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
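The RPC calls above provision the freshly started target entirely over JSON-RPC: a TCP transport (with the -o -u 8192 options from the trace), a 64 MiB malloc bdev with 512-byte blocks, a subsystem cnode1 carrying that namespace, and a TCP listener on 10.0.0.3:4420; the gen_nvmf_target_json heredoc that starts right after them (and continues below) assembles the initiator-side config for bdevio. A rough equivalent of the provisioning sequence with rpc.py, assuming the target's default RPC socket, is:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420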
00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:16:52.787 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:16:52.787 "params": { 00:16:52.787 "name": "Nvme1", 00:16:52.787 "trtype": "tcp", 00:16:52.787 "traddr": "10.0.0.3", 00:16:52.787 "adrfam": "ipv4", 00:16:52.787 "trsvcid": "4420", 00:16:52.787 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.787 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:52.787 "hdgst": false, 00:16:52.787 "ddgst": false 00:16:52.787 }, 00:16:52.787 "method": "bdev_nvme_attach_controller" 00:16:52.787 }' 00:16:52.787 [2024-09-28 08:54:30.722440] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:16:52.787 [2024-09-28 08:54:30.722616] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid73831 ] 00:16:53.046 [2024-09-28 08:54:30.923937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:53.304 [2024-09-28 08:54:31.187437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.304 [2024-09-28 08:54:31.187535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.304 [2024-09-28 08:54:31.187547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.562 [2024-09-28 08:54:31.351881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:53.820 I/O targets: 00:16:53.820 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:53.820 00:16:53.820 00:16:53.820 CUnit - A unit testing framework for C - Version 2.1-3 00:16:53.820 http://cunit.sourceforge.net/ 00:16:53.820 00:16:53.820 00:16:53.820 Suite: bdevio tests on: Nvme1n1 00:16:53.820 Test: blockdev write read block ...passed 00:16:53.820 Test: blockdev write zeroes read block ...passed 00:16:53.820 Test: blockdev write zeroes read no split ...passed 00:16:53.820 Test: blockdev write zeroes read split ...passed 00:16:53.820 Test: blockdev write zeroes read split partial ...passed 00:16:53.820 Test: blockdev reset ...[2024-09-28 08:54:31.708375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:53.821 [2024-09-28 08:54:31.708789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:16:53.821 [2024-09-28 08:54:31.727760] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
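The JSON fragment printed by gen_nvmf_target_json above is the one bdev_nvme_attach_controller entry that bdevio loads through --json /dev/fd/62; the reset test in the trace then drops the TCP connection ("Bad file descriptor") and verifies the controller resets successfully. A hedged reconstruction of the invocation outside the test harness is sketched below: only the inner config entry appears in the trace, so wrapping it in the usual SPDK "subsystems"/"bdev" JSON config layout and feeding it from a file rather than /dev/fd/62 are assumptions.

cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# Same binary and memory options as the trace, reading the config from a file.
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json --no-huge -s 1024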
00:16:53.821 passed 00:16:53.821 Test: blockdev write read 8 blocks ...passed 00:16:53.821 Test: blockdev write read size > 128k ...passed 00:16:53.821 Test: blockdev write read invalid size ...passed 00:16:53.821 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:53.821 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:53.821 Test: blockdev write read max offset ...passed 00:16:53.821 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:53.821 Test: blockdev writev readv 8 blocks ...passed 00:16:53.821 Test: blockdev writev readv 30 x 1block ...passed 00:16:53.821 Test: blockdev writev readv block ...passed 00:16:53.821 Test: blockdev writev readv size > 128k ...passed 00:16:53.821 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:53.821 Test: blockdev comparev and writev ...[2024-09-28 08:54:31.740651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.821 [2024-09-28 08:54:31.740721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:53.821 [2024-09-28 08:54:31.740769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.821 [2024-09-28 08:54:31.740819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:53.821 [2024-09-28 08:54:31.741436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.821 [2024-09-28 08:54:31.741488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:53.821 [2024-09-28 08:54:31.741517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.821 [2024-09-28 08:54:31.741537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:53.821 [2024-09-28 08:54:31.742082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.821 [2024-09-28 08:54:31.742132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:53.821 [2024-09-28 08:54:31.742171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.821 [2024-09-28 08:54:31.742201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:53.821 [2024-09-28 08:54:31.742838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.821 [2024-09-28 08:54:31.742892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:53.821 [2024-09-28 08:54:31.742922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:53.821 [2024-09-28 08:54:31.742943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:53.821 passed 00:16:53.821 Test: blockdev nvme passthru rw ...passed 00:16:53.821 Test: blockdev nvme passthru vendor specific ...[2024-09-28 08:54:31.743982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:53.821 [2024-09-28 08:54:31.744037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:53.821 [2024-09-28 08:54:31.744204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:53.821 [2024-09-28 08:54:31.744236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:53.821 [2024-09-28 08:54:31.744398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:53.821 [2024-09-28 08:54:31.744439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:53.821 [2024-09-28 08:54:31.744596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:53.821 [2024-09-28 08:54:31.744637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:53.821 passed 00:16:53.821 Test: blockdev nvme admin passthru ...passed 00:16:53.821 Test: blockdev copy ...passed 00:16:53.821 00:16:53.821 Run Summary: Type Total Ran Passed Failed Inactive 00:16:53.821 suites 1 1 n/a 0 0 00:16:53.821 tests 23 23 23 0 0 00:16:53.821 asserts 152 152 152 0 n/a 00:16:53.821 00:16:53.821 Elapsed time = 0.268 seconds 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:54.756 rmmod nvme_tcp 00:16:54.756 rmmod nvme_fabrics 00:16:54.756 rmmod nvme_keyring 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 73789 ']' 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 73789 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 73789 ']' 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 73789 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73789 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:16:54.756 killing process with pid 73789 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73789' 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 73789 00:16:54.756 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 73789 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:55.696 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:55.954 08:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:16:55.954 00:16:55.954 real 0m5.151s 00:16:55.954 user 0m17.164s 00:16:55.954 sys 0m1.669s 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:55.954 ************************************ 00:16:55.954 END TEST nvmf_bdevio_no_huge 00:16:55.954 ************************************ 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:55.954 ************************************ 00:16:55.954 START TEST nvmf_tls 00:16:55.954 ************************************ 00:16:55.954 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:56.213 * Looking for test storage... 
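The teardown that nvmftestfini/nvmf_veth_fini performs at the end of the bdevio run above follows a reusable pattern: every iptables rule the test added carries an SPDK_NVMF comment tag, so cleanup is just re-loading a ruleset with those entries filtered out, then detaching and deleting the veth pairs, the bridge, and the target namespace. The sketch below is an illustrative standalone version of that pattern; the interface and namespace names match the log, but the script itself is not the suite's own helper.

    #!/usr/bin/env bash
    # Illustrative cleanup modeled on the nvmf_veth_fini / iptr steps traced above.

    # Drop every rule tagged with the SPDK_NVMF comment by restoring a filtered dump.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach the bridge-side veth ends and bring them down.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true
        ip link set "$dev" down || true
    done

    # Remove the bridge, the host-side interfaces, and the namespaced target ends.
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    ip netns delete nvmf_tgt_ns_spdk || true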
00:16:56.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:56.213 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:56.213 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:16:56.213 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.213 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:56.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.214 --rc genhtml_branch_coverage=1 00:16:56.214 --rc genhtml_function_coverage=1 00:16:56.214 --rc genhtml_legend=1 00:16:56.214 --rc geninfo_all_blocks=1 00:16:56.214 --rc geninfo_unexecuted_blocks=1 00:16:56.214 00:16:56.214 ' 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:56.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.214 --rc genhtml_branch_coverage=1 00:16:56.214 --rc genhtml_function_coverage=1 00:16:56.214 --rc genhtml_legend=1 00:16:56.214 --rc geninfo_all_blocks=1 00:16:56.214 --rc geninfo_unexecuted_blocks=1 00:16:56.214 00:16:56.214 ' 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:56.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.214 --rc genhtml_branch_coverage=1 00:16:56.214 --rc genhtml_function_coverage=1 00:16:56.214 --rc genhtml_legend=1 00:16:56.214 --rc geninfo_all_blocks=1 00:16:56.214 --rc geninfo_unexecuted_blocks=1 00:16:56.214 00:16:56.214 ' 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:56.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.214 --rc genhtml_branch_coverage=1 00:16:56.214 --rc genhtml_function_coverage=1 00:16:56.214 --rc genhtml_legend=1 00:16:56.214 --rc geninfo_all_blocks=1 00:16:56.214 --rc geninfo_unexecuted_blocks=1 00:16:56.214 00:16:56.214 ' 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.214 08:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.214 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:56.214 
08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.214 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:56.215 Cannot find device "nvmf_init_br" 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:56.215 Cannot find device "nvmf_init_br2" 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:56.215 Cannot find device "nvmf_tgt_br" 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:56.215 Cannot find device "nvmf_tgt_br2" 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:56.215 Cannot find device "nvmf_init_br" 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:56.215 Cannot find device "nvmf_init_br2" 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:56.215 Cannot find device "nvmf_tgt_br" 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:16:56.215 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:56.474 Cannot find device "nvmf_tgt_br2" 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:56.474 Cannot find device "nvmf_br" 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:56.474 Cannot find device "nvmf_init_if" 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:56.474 Cannot find device "nvmf_init_if2" 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:56.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:56.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:56.474 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:56.475 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:56.475 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:56.475 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:56.475 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:56.475 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:56.475 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:56.475 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:56.475 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:56.475 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:56.475 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:56.475 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:56.475 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:56.733 08:54:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:56.733 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:56.733 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:16:56.733 00:16:56.733 --- 10.0.0.3 ping statistics --- 00:16:56.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.733 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:56.733 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:56.733 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:16:56.733 00:16:56.733 --- 10.0.0.4 ping statistics --- 00:16:56.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.733 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:56.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:56.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:16:56.733 00:16:56.733 --- 10.0.0.1 ping statistics --- 00:16:56.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.733 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:56.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:56.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:16:56.733 00:16:56.733 --- 10.0.0.2 ping statistics --- 00:16:56.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.733 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:56.733 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=74102 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 74102 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74102 ']' 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:56.734 08:54:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:56.734 [2024-09-28 08:54:34.660168] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
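The topology that nvmf_veth_init builds above (initiator addresses 10.0.0.1/10.0.0.2 on the host, target addresses 10.0.0.3/10.0.0.4 inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge, verified with the four pings) can be reproduced on its own with a handful of ip(8) commands. The sketch below mirrors only the first initiator/target pair and is illustrative rather than the test's common.sh.

    #!/usr/bin/env bash
    # Illustrative single-pair version of the veth/bridge topology traced above.
    set -e

    ip netns add nvmf_tgt_ns_spdk

    # One veth pair for the initiator side, one for the target side.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br

    # Move the target end into the namespace and address both ends.
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # Bring everything up and bridge the two *_br peers together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Allow NVMe/TCP traffic in (tagged for later cleanup) and verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3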
00:16:56.734 [2024-09-28 08:54:34.660329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.992 [2024-09-28 08:54:34.839433] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.250 [2024-09-28 08:54:35.061616] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.250 [2024-09-28 08:54:35.061693] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.250 [2024-09-28 08:54:35.061716] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.250 [2024-09-28 08:54:35.061735] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.250 [2024-09-28 08:54:35.061749] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.250 [2024-09-28 08:54:35.061790] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.816 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:57.816 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:16:57.816 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:57.816 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:57.816 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.816 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.816 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:16:57.816 08:54:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:58.107 true 00:16:58.107 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:16:58.107 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:58.365 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:16:58.365 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:16:58.365 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:58.625 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:16:58.625 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:59.192 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:16:59.192 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:16:59.192 08:54:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:59.452 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:16:59.452 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:16:59.711 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:16:59.711 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:16:59.711 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:59.711 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:16:59.970 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:16:59.970 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:16:59.970 08:54:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:00.230 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:00.230 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:00.489 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:00.489 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:00.489 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:00.748 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:00.748 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.frY9N5uuFJ 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Uwzilhycp4 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.frY9N5uuFJ 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Uwzilhycp4 00:17:01.007 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:01.574 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:01.833 [2024-09-28 08:54:39.749738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:02.091 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.frY9N5uuFJ 00:17:02.091 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.frY9N5uuFJ 00:17:02.091 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:02.356 [2024-09-28 08:54:40.122663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.356 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:02.615 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:02.615 [2024-09-28 08:54:40.594714] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:02.615 [2024-09-28 08:54:40.595211] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:02.873 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:03.131 malloc0 00:17:03.131 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:03.389 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.frY9N5uuFJ 00:17:03.647 08:54:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:03.904 08:54:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.frY9N5uuFJ 00:17:16.106 Initializing NVMe Controllers 00:17:16.106 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:16.106 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:16.106 Initialization complete. Launching workers. 00:17:16.106 ======================================================== 00:17:16.106 Latency(us) 00:17:16.106 Device Information : IOPS MiB/s Average min max 00:17:16.106 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6633.55 25.91 9650.61 2374.21 16062.30 00:17:16.106 ======================================================== 00:17:16.106 Total : 6633.55 25.91 9650.61 2374.21 16062.30 00:17:16.106 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.frY9N5uuFJ 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.frY9N5uuFJ 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74348 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74348 /var/tmp/bdevperf.sock 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74348 ']' 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
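The target-side sequence captured above (format a PSK into the NVMe TLS interchange form, store it in a 0600 file, register it as a keyring key, then create a TLS-enabled TCP listener and bind the host NQN to the key) can be summarized with the sketch below. The key bytes are the same throwaway example value as in the log, the interchange string is reconstructed as base64 of the key followed by its CRC-32 (appended little-endian here, an assumption inferred from the NVMeTLSkey-1:01:...: strings above), and the rpc.py calls are the ones the log itself issues. Treat it as a reading aid for the trace, not as the suite's tls.sh.

    #!/usr/bin/env bash
    # Illustrative reconstruction of the target-side TLS setup steps traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    psk_hex=00112233445566778899aabbccddeeff   # throwaway example value from the log

    # Build the interchange string NVMeTLSkey-1:01:base64(key + crc32(key)):
    # "01" matches the hash field in the keys generated above; the CRC-32 is
    # appended in little-endian byte order (assumption for this sketch).
    key=$(python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:01:" + base64.b64encode(k+crc).decode() + ":", end="")' "$psk_hex")

    key_path=$(mktemp)
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"

    # Require TLS 1.3 on the ssl sock implementation, then bring up the target.
    "$rpc" sock_impl_set_options -i ssl --tls-version 13
    "$rpc" framework_start_init
    "$rpc" nvmf_create_transport -t tcp -o
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
    "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    "$rpc" keyring_file_add_key key0 "$key_path"
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0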
00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.106 08:54:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:16.106 [2024-09-28 08:54:52.138554] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:16.106 [2024-09-28 08:54:52.138731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74348 ] 00:17:16.106 [2024-09-28 08:54:52.306183] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.106 [2024-09-28 08:54:52.473364] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.106 [2024-09-28 08:54:52.623521] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:16.106 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:16.106 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:16.106 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.frY9N5uuFJ 00:17:16.106 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:16.106 [2024-09-28 08:54:53.654425] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:16.106 TLSTESTn1 00:17:16.106 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:16.106 Running I/O for 10 seconds... 
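On the initiator side, the run above drives the same key through bdevperf: bdevperf is started idle on its own RPC socket, the key file is registered there, a TLS connection to the listener is opened with bdev_nvme_attach_controller --psk, and bdevperf.py triggers the actual verify workload. A condensed sketch of that sequence, using the commands as they appear in the log (the key path is this run's temporary file and would differ elsewhere):

    #!/usr/bin/env bash
    # Illustrative client-side bdevperf-over-TLS flow, condensed from the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    sock=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z) on its private RPC socket, then configure it over RPC.
    "$bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
    while [ ! -S "$sock" ]; do sleep 0.1; done   # crude stand-in for waitforlisten

    # Hand bdevperf the PSK and attach a TLS-secured controller to the target.
    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.frY9N5uuFJ
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # Kick off the configured verify workload.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests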
00:17:26.026 2944.00 IOPS, 11.50 MiB/s 3008.00 IOPS, 11.75 MiB/s 2997.67 IOPS, 11.71 MiB/s 3008.00 IOPS, 11.75 MiB/s 3008.60 IOPS, 11.75 MiB/s 3019.33 IOPS, 11.79 MiB/s 3004.29 IOPS, 11.74 MiB/s 2999.25 IOPS, 11.72 MiB/s 2986.78 IOPS, 11.67 MiB/s 2990.20 IOPS, 11.68 MiB/s 00:17:26.026 Latency(us) 00:17:26.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.026 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:26.026 Verification LBA range: start 0x0 length 0x2000 00:17:26.026 TLSTESTn1 : 10.04 2991.50 11.69 0.00 0.00 42674.62 12392.26 29669.93 00:17:26.026 =================================================================================================================== 00:17:26.026 Total : 2991.50 11.69 0.00 0.00 42674.62 12392.26 29669.93 00:17:26.026 { 00:17:26.026 "results": [ 00:17:26.026 { 00:17:26.026 "job": "TLSTESTn1", 00:17:26.026 "core_mask": "0x4", 00:17:26.026 "workload": "verify", 00:17:26.026 "status": "finished", 00:17:26.026 "verify_range": { 00:17:26.026 "start": 0, 00:17:26.026 "length": 8192 00:17:26.026 }, 00:17:26.026 "queue_depth": 128, 00:17:26.026 "io_size": 4096, 00:17:26.026 "runtime": 10.037774, 00:17:26.026 "iops": 2991.4999082465893, 00:17:26.026 "mibps": 11.68554651658824, 00:17:26.026 "io_failed": 0, 00:17:26.026 "io_timeout": 0, 00:17:26.026 "avg_latency_us": 42674.62368456108, 00:17:26.026 "min_latency_us": 12392.261818181818, 00:17:26.026 "max_latency_us": 29669.934545454544 00:17:26.026 } 00:17:26.026 ], 00:17:26.026 "core_count": 1 00:17:26.026 } 00:17:26.026 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:26.026 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 74348 00:17:26.026 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74348 ']' 00:17:26.026 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74348 00:17:26.026 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:26.026 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:26.026 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74348 00:17:26.026 killing process with pid 74348 00:17:26.026 Received shutdown signal, test time was about 10.000000 seconds 00:17:26.026 00:17:26.026 Latency(us) 00:17:26.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.026 =================================================================================================================== 00:17:26.026 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.026 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:26.026 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:26.026 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74348' 00:17:26.026 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74348 00:17:26.026 08:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74348 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.Uwzilhycp4 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Uwzilhycp4 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Uwzilhycp4 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Uwzilhycp4 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74500 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74500 /var/tmp/bdevperf.sock 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74500 ']' 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:27.404 08:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.404 [2024-09-28 08:55:05.185359] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:27.404 [2024-09-28 08:55:05.185526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74500 ] 00:17:27.405 [2024-09-28 08:55:05.354068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.663 [2024-09-28 08:55:05.533225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.923 [2024-09-28 08:55:05.704851] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:28.181 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:28.181 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:28.181 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Uwzilhycp4 00:17:28.439 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:28.698 [2024-09-28 08:55:06.585232] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.698 [2024-09-28 08:55:06.599653] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:28.698 [2024-09-28 08:55:06.600285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:28.698 [2024-09-28 08:55:06.601261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:28.698 [2024-09-28 08:55:06.602258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:28.698 [2024-09-28 08:55:06.602303] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:28.698 [2024-09-28 08:55:06.602322] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:28.698 [2024-09-28 08:55:06.602338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
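Restated as standalone commands, the failing sequence above is the two RPCs below (socket path, NQNs and key path copied verbatim from the tls.sh lines just logged); when the TLS handshake does not complete, the attach call comes back with the -5 "Input/output error" shown next:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Uwzilhycp4
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# The target drops the connection during the handshake, so the initiator logs errno 107
# (Transport endpoint is not connected) and leaves the controller in failed state, as captured above.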
00:17:28.698 request: 00:17:28.698 { 00:17:28.698 "name": "TLSTEST", 00:17:28.698 "trtype": "tcp", 00:17:28.698 "traddr": "10.0.0.3", 00:17:28.698 "adrfam": "ipv4", 00:17:28.698 "trsvcid": "4420", 00:17:28.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.698 "prchk_reftag": false, 00:17:28.698 "prchk_guard": false, 00:17:28.698 "hdgst": false, 00:17:28.698 "ddgst": false, 00:17:28.698 "psk": "key0", 00:17:28.698 "allow_unrecognized_csi": false, 00:17:28.698 "method": "bdev_nvme_attach_controller", 00:17:28.698 "req_id": 1 00:17:28.698 } 00:17:28.698 Got JSON-RPC error response 00:17:28.698 response: 00:17:28.698 { 00:17:28.698 "code": -5, 00:17:28.698 "message": "Input/output error" 00:17:28.698 } 00:17:28.698 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74500 00:17:28.698 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74500 ']' 00:17:28.698 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74500 00:17:28.698 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:28.698 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:28.698 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74500 00:17:28.698 killing process with pid 74500 00:17:28.698 Received shutdown signal, test time was about 10.000000 seconds 00:17:28.698 00:17:28.698 Latency(us) 00:17:28.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.698 =================================================================================================================== 00:17:28.698 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:28.698 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:28.698 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:28.698 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74500' 00:17:28.698 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74500 00:17:28.698 08:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74500 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.frY9N5uuFJ 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.frY9N5uuFJ 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.frY9N5uuFJ 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.frY9N5uuFJ 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74542 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74542 /var/tmp/bdevperf.sock 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74542 ']' 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:30.075 08:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.075 [2024-09-28 08:55:07.819863] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:30.075 [2024-09-28 08:55:07.820026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74542 ] 00:17:30.075 [2024-09-28 08:55:07.980713] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.334 [2024-09-28 08:55:08.157105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.334 [2024-09-28 08:55:08.327696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:30.901 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:30.901 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:30.901 08:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.frY9N5uuFJ 00:17:31.159 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:31.418 [2024-09-28 08:55:09.361286] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.418 [2024-09-28 08:55:09.370305] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:31.418 [2024-09-28 08:55:09.370353] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:31.418 [2024-09-28 08:55:09.370433] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:31.418 [2024-09-28 08:55:09.370610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:31.418 [2024-09-28 08:55:09.371555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:31.418 [2024-09-28 08:55:09.372552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:31.418 [2024-09-28 08:55:09.372598] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:31.418 [2024-09-28 08:55:09.372623] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:31.418 [2024-09-28 08:55:09.372639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
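The target-side lookup fails here because no PSK is registered for an identity naming host2: the test deliberately attaches with hostnqn host2, while the target was presumably set up with host1 only (as in the setup sequence shown later in this log). A sketch of the registration that is intentionally missing (arguments modeled on the host1 call used elsewhere in this run; key0 is assumed to already be loaded in the target's keyring):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key0   # not executed by the test
# Without this mapping the target cannot resolve the PSK identity and closes the
# connection, which the initiator reports as the I/O error shown next.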
00:17:31.418 request: 00:17:31.418 { 00:17:31.418 "name": "TLSTEST", 00:17:31.418 "trtype": "tcp", 00:17:31.418 "traddr": "10.0.0.3", 00:17:31.418 "adrfam": "ipv4", 00:17:31.418 "trsvcid": "4420", 00:17:31.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.418 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:31.418 "prchk_reftag": false, 00:17:31.418 "prchk_guard": false, 00:17:31.418 "hdgst": false, 00:17:31.418 "ddgst": false, 00:17:31.418 "psk": "key0", 00:17:31.418 "allow_unrecognized_csi": false, 00:17:31.418 "method": "bdev_nvme_attach_controller", 00:17:31.418 "req_id": 1 00:17:31.418 } 00:17:31.418 Got JSON-RPC error response 00:17:31.418 response: 00:17:31.418 { 00:17:31.418 "code": -5, 00:17:31.418 "message": "Input/output error" 00:17:31.418 } 00:17:31.418 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74542 00:17:31.418 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74542 ']' 00:17:31.418 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74542 00:17:31.418 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:31.418 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:31.418 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74542 00:17:31.677 killing process with pid 74542 00:17:31.677 Received shutdown signal, test time was about 10.000000 seconds 00:17:31.677 00:17:31.677 Latency(us) 00:17:31.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.678 =================================================================================================================== 00:17:31.678 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:31.678 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:31.678 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:31.678 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74542' 00:17:31.678 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74542 00:17:31.678 08:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74542 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.frY9N5uuFJ 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.frY9N5uuFJ 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.frY9N5uuFJ 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.frY9N5uuFJ 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74583 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74583 /var/tmp/bdevperf.sock 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74583 ']' 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:32.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.617 08:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:32.876 [2024-09-28 08:55:10.666904] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:32.876 [2024-09-28 08:55:10.667074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74583 ] 00:17:32.876 [2024-09-28 08:55:10.840165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.136 [2024-09-28 08:55:11.028495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.395 [2024-09-28 08:55:11.203656] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:33.654 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:33.654 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:33.654 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.frY9N5uuFJ 00:17:33.913 08:55:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:34.173 [2024-09-28 08:55:12.112374] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:34.173 [2024-09-28 08:55:12.121680] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:34.173 [2024-09-28 08:55:12.121753] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:34.173 [2024-09-28 08:55:12.121829] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:34.173 [2024-09-28 08:55:12.121912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:34.173 [2024-09-28 08:55:12.122880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:34.173 [2024-09-28 08:55:12.123869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:34.173 [2024-09-28 08:55:12.123919] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:34.173 [2024-09-28 08:55:12.123944] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:34.173 [2024-09-28 08:55:12.123960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
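Same failure mode, this time with the subsystem flipped: the identity names cnode2, which the target is not serving a PSK for. The identity string the target searches is just the fixed "NVMe0R01" tag followed by the host and subsystem NQNs (the tag is copied verbatim from the error above; reading its trailing "01" as the SHA-256 hash variant is an assumption from the NVMe/TCP TLS PSK scheme), so the lookup key for this attempt can be reproduced with:
printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2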
00:17:34.173 request: 00:17:34.173 { 00:17:34.173 "name": "TLSTEST", 00:17:34.173 "trtype": "tcp", 00:17:34.173 "traddr": "10.0.0.3", 00:17:34.173 "adrfam": "ipv4", 00:17:34.173 "trsvcid": "4420", 00:17:34.173 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:34.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:34.173 "prchk_reftag": false, 00:17:34.173 "prchk_guard": false, 00:17:34.173 "hdgst": false, 00:17:34.173 "ddgst": false, 00:17:34.173 "psk": "key0", 00:17:34.173 "allow_unrecognized_csi": false, 00:17:34.173 "method": "bdev_nvme_attach_controller", 00:17:34.173 "req_id": 1 00:17:34.173 } 00:17:34.173 Got JSON-RPC error response 00:17:34.173 response: 00:17:34.173 { 00:17:34.173 "code": -5, 00:17:34.173 "message": "Input/output error" 00:17:34.173 } 00:17:34.173 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74583 00:17:34.173 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74583 ']' 00:17:34.173 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74583 00:17:34.173 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:34.173 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:34.173 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74583 00:17:34.432 killing process with pid 74583 00:17:34.432 Received shutdown signal, test time was about 10.000000 seconds 00:17:34.432 00:17:34.432 Latency(us) 00:17:34.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.432 =================================================================================================================== 00:17:34.432 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:34.432 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:34.432 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:34.432 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74583' 00:17:34.432 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74583 00:17:34.432 08:55:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74583 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:35.370 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:35.371 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74618 00:17:35.371 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:35.371 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:35.371 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74618 /var/tmp/bdevperf.sock 00:17:35.371 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74618 ']' 00:17:35.371 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:35.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:35.371 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:35.371 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:35.371 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:35.371 08:55:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.629 [2024-09-28 08:55:13.385140] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:35.629 [2024-09-28 08:55:13.385344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74618 ] 00:17:35.629 [2024-09-28 08:55:13.547397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.888 [2024-09-28 08:55:13.726553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:36.147 [2024-09-28 08:55:13.897072] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:36.405 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:36.405 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:36.405 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:36.665 [2024-09-28 08:55:14.605960] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:36.665 [2024-09-28 08:55:14.606015] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:36.665 request: 00:17:36.665 { 00:17:36.665 "name": "key0", 00:17:36.665 "path": "", 00:17:36.665 "method": "keyring_file_add_key", 00:17:36.665 "req_id": 1 00:17:36.665 } 00:17:36.665 Got JSON-RPC error response 00:17:36.665 response: 00:17:36.665 { 00:17:36.665 "code": -1, 00:17:36.665 "message": "Operation not permitted" 00:17:36.665 } 00:17:36.665 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:36.924 [2024-09-28 08:55:14.906232] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:36.924 [2024-09-28 08:55:14.906318] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:36.924 request: 00:17:36.924 { 00:17:36.924 "name": "TLSTEST", 00:17:36.924 "trtype": "tcp", 00:17:36.924 "traddr": "10.0.0.3", 00:17:36.924 "adrfam": "ipv4", 00:17:36.924 "trsvcid": "4420", 00:17:36.924 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:36.924 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:36.924 "prchk_reftag": false, 00:17:36.924 "prchk_guard": false, 00:17:36.924 "hdgst": false, 00:17:36.924 "ddgst": false, 00:17:36.924 "psk": "key0", 00:17:36.924 "allow_unrecognized_csi": false, 00:17:36.924 "method": "bdev_nvme_attach_controller", 00:17:36.924 "req_id": 1 00:17:36.924 } 00:17:36.924 Got JSON-RPC error response 00:17:36.924 response: 00:17:36.924 { 00:17:36.924 "code": -126, 00:17:36.924 "message": "Required key not available" 00:17:36.924 } 00:17:37.184 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74618 00:17:37.184 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74618 ']' 00:17:37.184 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74618 00:17:37.184 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:37.184 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:37.184 08:55:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74618 00:17:37.184 killing process with pid 74618 00:17:37.184 Received shutdown signal, test time was about 10.000000 seconds 00:17:37.184 00:17:37.184 Latency(us) 00:17:37.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.184 =================================================================================================================== 00:17:37.184 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:37.184 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:37.184 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:37.184 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74618' 00:17:37.184 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74618 00:17:37.184 08:55:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74618 00:17:38.125 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:38.125 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:38.125 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:38.125 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:38.125 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:38.125 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 74102 00:17:38.126 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74102 ']' 00:17:38.126 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74102 00:17:38.126 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:38.126 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:38.126 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74102 00:17:38.126 killing process with pid 74102 00:17:38.126 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:38.126 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:38.126 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74102' 00:17:38.126 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74102 00:17:38.126 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74102 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:17:39.503 08:55:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.txqqXu9dzf 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.txqqXu9dzf 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=74686 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 74686 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74686 ']' 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.503 08:55:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:39.503 [2024-09-28 08:55:17.485174] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:39.503 [2024-09-28 08:55:17.485360] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.762 [2024-09-28 08:55:17.658571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.020 [2024-09-28 08:55:17.830876] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.020 [2024-09-28 08:55:17.831010] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
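The NVMeTLSkey-1:02:...: string above is what format_interchange_psk produces from the raw key text. Its first 64 base64 characters decode back to the ASCII key itself, so only the trailing checksum bytes need to be assumed; a hedged reconstruction (assumptions: the last four bytes are a little-endian zlib CRC-32 of the key bytes, and digest 2 maps to the ":02:" / SHA-384 label):
python3 - <<'EOF'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"   # the ASCII key text used in this run
blob = key + struct.pack("<I", zlib.crc32(key))              # key bytes + CRC-32, assumed little-endian
print("NVMeTLSkey-1:02:" + base64.b64encode(blob).decode() + ":")
EOF
# If the checksum convention assumed here is correct, this prints the same
# NVMeTLSkey-1:02:MDAx...wWXNJw==: value captured above; the file is then written
# with echo -n and chmod 0600 exactly as the log shows.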
00:17:40.020 [2024-09-28 08:55:17.831048] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.020 [2024-09-28 08:55:17.831067] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.020 [2024-09-28 08:55:17.831081] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.020 [2024-09-28 08:55:17.831122] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.020 [2024-09-28 08:55:17.999258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:40.588 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.588 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:40.588 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:40.588 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:40.588 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.588 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.588 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.txqqXu9dzf 00:17:40.588 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.txqqXu9dzf 00:17:40.588 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:40.848 [2024-09-28 08:55:18.727874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.848 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:41.108 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:41.370 [2024-09-28 08:55:19.280246] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:41.370 [2024-09-28 08:55:19.280603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:41.370 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:41.662 malloc0 00:17:41.662 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:41.922 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.txqqXu9dzf 00:17:42.181 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.txqqXu9dzf 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
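For reference, the target-side TLS setup logged just above (tls.sh lines 52-59) collapses to the RPC sequence below; paths, NQNs and sizes are exactly as logged, and the $RPC variable is introduced here purely as shorthand:
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k puts the listener in TLS mode
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.txqqXu9dzf            # the 0600 key file written at tls.sh line 163
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
The bdevperf run that follows attaches with the same key file and host1, so this time the handshake succeeds and TLSTESTn1 sustains roughly 3300 IOPS over the 10-second verify.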
00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.txqqXu9dzf 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74742 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74742 /var/tmp/bdevperf.sock 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74742 ']' 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:42.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:42.440 08:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.698 [2024-09-28 08:55:20.515786] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:17:42.698 [2024-09-28 08:55:20.515990] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74742 ] 00:17:42.698 [2024-09-28 08:55:20.682407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.957 [2024-09-28 08:55:20.897418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.216 [2024-09-28 08:55:21.065281] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:43.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:43.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:43.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.txqqXu9dzf 00:17:43.784 08:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:44.043 [2024-09-28 08:55:22.013688] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:44.302 TLSTESTn1 00:17:44.302 08:55:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:44.302 Running I/O for 10 seconds... 00:17:54.539 3201.00 IOPS, 12.50 MiB/s 3264.00 IOPS, 12.75 MiB/s 3294.33 IOPS, 12.87 MiB/s 3298.25 IOPS, 12.88 MiB/s 3302.40 IOPS, 12.90 MiB/s 3306.67 IOPS, 12.92 MiB/s 3303.71 IOPS, 12.91 MiB/s 3296.00 IOPS, 12.88 MiB/s 3299.56 IOPS, 12.89 MiB/s 3302.40 IOPS, 12.90 MiB/s 00:17:54.539 Latency(us) 00:17:54.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.539 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:54.539 Verification LBA range: start 0x0 length 0x2000 00:17:54.539 TLSTESTn1 : 10.04 3302.69 12.90 0.00 0.00 38680.44 10009.13 25618.62 00:17:54.539 =================================================================================================================== 00:17:54.539 Total : 3302.69 12.90 0.00 0.00 38680.44 10009.13 25618.62 00:17:54.539 { 00:17:54.539 "results": [ 00:17:54.539 { 00:17:54.539 "job": "TLSTESTn1", 00:17:54.539 "core_mask": "0x4", 00:17:54.539 "workload": "verify", 00:17:54.539 "status": "finished", 00:17:54.539 "verify_range": { 00:17:54.539 "start": 0, 00:17:54.539 "length": 8192 00:17:54.539 }, 00:17:54.539 "queue_depth": 128, 00:17:54.539 "io_size": 4096, 00:17:54.539 "runtime": 10.037868, 00:17:54.539 "iops": 3302.693360781393, 00:17:54.539 "mibps": 12.901145940552317, 00:17:54.539 "io_failed": 0, 00:17:54.539 "io_timeout": 0, 00:17:54.539 "avg_latency_us": 38680.43905931906, 00:17:54.539 "min_latency_us": 10009.134545454546, 00:17:54.539 "max_latency_us": 25618.618181818183 00:17:54.539 } 00:17:54.539 ], 00:17:54.539 "core_count": 1 00:17:54.539 } 00:17:54.539 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:54.539 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # 
killprocess 74742 00:17:54.539 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74742 ']' 00:17:54.539 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74742 00:17:54.539 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:54.540 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:54.540 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74742 00:17:54.540 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:54.540 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:54.540 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74742' 00:17:54.540 killing process with pid 74742 00:17:54.540 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74742 00:17:54.540 Received shutdown signal, test time was about 10.000000 seconds 00:17:54.540 00:17:54.540 Latency(us) 00:17:54.540 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.540 =================================================================================================================== 00:17:54.540 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:54.540 08:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74742 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.txqqXu9dzf 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.txqqXu9dzf 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.txqqXu9dzf 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.txqqXu9dzf 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.txqqXu9dzf 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74891 00:17:55.475 
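The chmod 0666 above sets up the next negative case: keyring_file appears to require owner-only access to the key file (the successful run used chmod 0600 at tls.sh line 163), so re-adding the same key with the looser mode is expected to be rejected. Spelled out:
chmod 0666 /tmp/tmp.txqqXu9dzf   # tls.sh line 171; the earlier successful run used 0600 (line 163)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.txqqXu9dzf
# Rejected with "Invalid permissions for key file '/tmp/tmp.txqqXu9dzf': 0100666" and a
# JSON-RPC "Operation not permitted" error, which is exactly the failure the log shows next.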
08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74891 /var/tmp/bdevperf.sock 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74891 ']' 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:55.475 08:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.475 [2024-09-28 08:55:33.432709] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:55.475 [2024-09-28 08:55:33.432937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74891 ] 00:17:55.733 [2024-09-28 08:55:33.596976] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.990 [2024-09-28 08:55:33.761960] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.990 [2024-09-28 08:55:33.918120] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:56.556 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:56.556 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:56.556 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.txqqXu9dzf 00:17:56.815 [2024-09-28 08:55:34.592106] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.txqqXu9dzf': 0100666 00:17:56.815 [2024-09-28 08:55:34.592169] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:56.815 request: 00:17:56.815 { 00:17:56.815 "name": "key0", 00:17:56.815 "path": "/tmp/tmp.txqqXu9dzf", 00:17:56.815 "method": "keyring_file_add_key", 00:17:56.815 "req_id": 1 00:17:56.815 } 00:17:56.815 Got JSON-RPC error response 00:17:56.815 response: 00:17:56.815 { 00:17:56.815 "code": -1, 00:17:56.815 "message": "Operation not permitted" 00:17:56.815 } 00:17:56.815 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:57.073 [2024-09-28 08:55:34.836365] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: 
TLS support is considered experimental 00:17:57.074 [2024-09-28 08:55:34.836471] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:57.074 request: 00:17:57.074 { 00:17:57.074 "name": "TLSTEST", 00:17:57.074 "trtype": "tcp", 00:17:57.074 "traddr": "10.0.0.3", 00:17:57.074 "adrfam": "ipv4", 00:17:57.074 "trsvcid": "4420", 00:17:57.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.074 "prchk_reftag": false, 00:17:57.074 "prchk_guard": false, 00:17:57.074 "hdgst": false, 00:17:57.074 "ddgst": false, 00:17:57.074 "psk": "key0", 00:17:57.074 "allow_unrecognized_csi": false, 00:17:57.074 "method": "bdev_nvme_attach_controller", 00:17:57.074 "req_id": 1 00:17:57.074 } 00:17:57.074 Got JSON-RPC error response 00:17:57.074 response: 00:17:57.074 { 00:17:57.074 "code": -126, 00:17:57.074 "message": "Required key not available" 00:17:57.074 } 00:17:57.074 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74891 00:17:57.074 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74891 ']' 00:17:57.074 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74891 00:17:57.074 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:57.074 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.074 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74891 00:17:57.074 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:57.074 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:57.074 killing process with pid 74891 00:17:57.074 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74891' 00:17:57.074 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74891 00:17:57.074 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.074 00:17:57.074 Latency(us) 00:17:57.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.074 =================================================================================================================== 00:17:57.074 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.074 08:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74891 00:17:58.008 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 74686 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74686 ']' 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74686 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74686 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74686' 00:17:58.009 killing process with pid 74686 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74686 00:17:58.009 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74686 00:17:59.384 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:17:59.384 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:59.384 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:59.384 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.384 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=74945 00:17:59.384 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:59.384 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 74945 00:17:59.384 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 74945 ']' 00:17:59.384 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.384 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:59.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.384 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.384 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:59.384 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.384 [2024-09-28 08:55:37.051202] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:59.384 [2024-09-28 08:55:37.051361] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.384 [2024-09-28 08:55:37.206504] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.384 [2024-09-28 08:55:37.371864] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.384 [2024-09-28 08:55:37.371949] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:59.384 [2024-09-28 08:55:37.371984] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.384 [2024-09-28 08:55:37.372000] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.384 [2024-09-28 08:55:37.372013] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.384 [2024-09-28 08:55:37.372049] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.642 [2024-09-28 08:55:37.532079] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:00.209 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:00.209 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:00.209 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:00.209 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:00.209 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.209 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.209 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.txqqXu9dzf 00:18:00.209 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:18:00.209 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.txqqXu9dzf 00:18:00.209 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:18:00.209 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.209 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:18:00.209 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:00.209 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.txqqXu9dzf 00:18:00.209 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.txqqXu9dzf 00:18:00.209 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:00.467 [2024-09-28 08:55:38.294682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.467 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:00.726 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:00.984 [2024-09-28 08:55:38.806782] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:00.984 [2024-09-28 08:55:38.807431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:00.984 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:18:01.242 malloc0 00:18:01.242 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:01.499 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.txqqXu9dzf 00:18:01.756 [2024-09-28 08:55:39.530009] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.txqqXu9dzf': 0100666 00:18:01.756 [2024-09-28 08:55:39.530573] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:01.756 request: 00:18:01.756 { 00:18:01.756 "name": "key0", 00:18:01.756 "path": "/tmp/tmp.txqqXu9dzf", 00:18:01.756 "method": "keyring_file_add_key", 00:18:01.756 "req_id": 1 00:18:01.756 } 00:18:01.756 Got JSON-RPC error response 00:18:01.756 response: 00:18:01.756 { 00:18:01.756 "code": -1, 00:18:01.756 "message": "Operation not permitted" 00:18:01.756 } 00:18:01.756 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:01.756 [2024-09-28 08:55:39.750033] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:01.756 [2024-09-28 08:55:39.750297] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:02.015 request: 00:18:02.015 { 00:18:02.015 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.015 "host": "nqn.2016-06.io.spdk:host1", 00:18:02.015 "psk": "key0", 00:18:02.015 "method": "nvmf_subsystem_add_host", 00:18:02.015 "req_id": 1 00:18:02.015 } 00:18:02.015 Got JSON-RPC error response 00:18:02.015 response: 00:18:02.015 { 00:18:02.015 "code": -32603, 00:18:02.015 "message": "Internal error" 00:18:02.015 } 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 74945 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 74945 ']' 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 74945 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74945 00:18:02.015 killing process with pid 74945 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74945' 00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 74945 
00:18:02.015 08:55:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 74945 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.txqqXu9dzf 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=75025 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 75025 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75025 ']' 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.951 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.210 [2024-09-28 08:55:40.949302] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:03.210 [2024-09-28 08:55:40.950001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.210 [2024-09-28 08:55:41.110620] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.468 [2024-09-28 08:55:41.269089] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.468 [2024-09-28 08:55:41.269351] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.468 [2024-09-28 08:55:41.269483] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.468 [2024-09-28 08:55:41.269575] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.468 [2024-09-28 08:55:41.269656] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
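# Note: the failures traced above are the expected negative case (target/tls.sh wraps the
# setup in NOT): the PSK file /tmp/tmp.txqqXu9dzf was left world-readable (mode 0100666),
# so keyring_file_add_key fails with "Operation not permitted" (-1), bdev_nvme_attach_controller
# --psk key0 fails with "Required key not available" (-126) after "Could not load PSK: key0",
# and nvmf_subsystem_add_host fails with "Internal error" (-32603) after "Key 'key0' does not
# exist". The chmod 0600 above clears the way for the successful retry that follows. A minimal
# sketch of that target-side sequence, built only from commands visible in this trace (the
# rpc.py path is shortened here for readability; everything else matches the trace):
#
#   chmod 0600 /tmp/tmp.txqqXu9dzf
#   scripts/rpc.py nvmf_create_transport -t tcp -o
#   scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
#   scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
#   scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
#   scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
#   scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.txqqXu9dzf
#   scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0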
00:18:03.468 [2024-09-28 08:55:41.269798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.468 [2024-09-28 08:55:41.430742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:04.034 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:04.034 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:04.034 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:04.034 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:04.034 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.034 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.034 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.txqqXu9dzf 00:18:04.034 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.txqqXu9dzf 00:18:04.034 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:04.292 [2024-09-28 08:55:42.235195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.292 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:04.552 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:18:04.810 [2024-09-28 08:55:42.775303] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:04.810 [2024-09-28 08:55:42.775792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:04.810 08:55:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:05.068 malloc0 00:18:05.326 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:05.584 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.txqqXu9dzf 00:18:05.584 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:05.854 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=75081 00:18:05.854 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:05.854 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:05.854 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 75081 /var/tmp/bdevperf.sock 00:18:05.854 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75081 ']' 
00:18:05.854 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:05.854 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:05.854 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:05.854 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.854 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.126 [2024-09-28 08:55:43.886408] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:06.126 [2024-09-28 08:55:43.886553] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75081 ] 00:18:06.126 [2024-09-28 08:55:44.050942] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.384 [2024-09-28 08:55:44.262326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.642 [2024-09-28 08:55:44.422064] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:06.901 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:06.901 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:06.901 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.txqqXu9dzf 00:18:07.159 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:07.418 [2024-09-28 08:55:45.233271] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:07.418 TLSTESTn1 00:18:07.418 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:18:07.677 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:07.677 "subsystems": [ 00:18:07.677 { 00:18:07.677 "subsystem": "keyring", 00:18:07.677 "config": [ 00:18:07.677 { 00:18:07.677 "method": "keyring_file_add_key", 00:18:07.677 "params": { 00:18:07.677 "name": "key0", 00:18:07.677 "path": "/tmp/tmp.txqqXu9dzf" 00:18:07.677 } 00:18:07.678 } 00:18:07.678 ] 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "subsystem": "iobuf", 00:18:07.678 "config": [ 00:18:07.678 { 00:18:07.678 "method": "iobuf_set_options", 00:18:07.678 "params": { 00:18:07.678 "small_pool_count": 8192, 00:18:07.678 "large_pool_count": 1024, 00:18:07.678 "small_bufsize": 8192, 00:18:07.678 "large_bufsize": 135168 00:18:07.678 } 00:18:07.678 } 00:18:07.678 ] 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "subsystem": "sock", 00:18:07.678 "config": [ 00:18:07.678 { 00:18:07.678 "method": "sock_set_default_impl", 00:18:07.678 "params": { 00:18:07.678 "impl_name": "uring" 00:18:07.678 
} 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "method": "sock_impl_set_options", 00:18:07.678 "params": { 00:18:07.678 "impl_name": "ssl", 00:18:07.678 "recv_buf_size": 4096, 00:18:07.678 "send_buf_size": 4096, 00:18:07.678 "enable_recv_pipe": true, 00:18:07.678 "enable_quickack": false, 00:18:07.678 "enable_placement_id": 0, 00:18:07.678 "enable_zerocopy_send_server": true, 00:18:07.678 "enable_zerocopy_send_client": false, 00:18:07.678 "zerocopy_threshold": 0, 00:18:07.678 "tls_version": 0, 00:18:07.678 "enable_ktls": false 00:18:07.678 } 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "method": "sock_impl_set_options", 00:18:07.678 "params": { 00:18:07.678 "impl_name": "posix", 00:18:07.678 "recv_buf_size": 2097152, 00:18:07.678 "send_buf_size": 2097152, 00:18:07.678 "enable_recv_pipe": true, 00:18:07.678 "enable_quickack": false, 00:18:07.678 "enable_placement_id": 0, 00:18:07.678 "enable_zerocopy_send_server": true, 00:18:07.678 "enable_zerocopy_send_client": false, 00:18:07.678 "zerocopy_threshold": 0, 00:18:07.678 "tls_version": 0, 00:18:07.678 "enable_ktls": false 00:18:07.678 } 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "method": "sock_impl_set_options", 00:18:07.678 "params": { 00:18:07.678 "impl_name": "uring", 00:18:07.678 "recv_buf_size": 2097152, 00:18:07.678 "send_buf_size": 2097152, 00:18:07.678 "enable_recv_pipe": true, 00:18:07.678 "enable_quickack": false, 00:18:07.678 "enable_placement_id": 0, 00:18:07.678 "enable_zerocopy_send_server": false, 00:18:07.678 "enable_zerocopy_send_client": false, 00:18:07.678 "zerocopy_threshold": 0, 00:18:07.678 "tls_version": 0, 00:18:07.678 "enable_ktls": false 00:18:07.678 } 00:18:07.678 } 00:18:07.678 ] 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "subsystem": "vmd", 00:18:07.678 "config": [] 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "subsystem": "accel", 00:18:07.678 "config": [ 00:18:07.678 { 00:18:07.678 "method": "accel_set_options", 00:18:07.678 "params": { 00:18:07.678 "small_cache_size": 128, 00:18:07.678 "large_cache_size": 16, 00:18:07.678 "task_count": 2048, 00:18:07.678 "sequence_count": 2048, 00:18:07.678 "buf_count": 2048 00:18:07.678 } 00:18:07.678 } 00:18:07.678 ] 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "subsystem": "bdev", 00:18:07.678 "config": [ 00:18:07.678 { 00:18:07.678 "method": "bdev_set_options", 00:18:07.678 "params": { 00:18:07.678 "bdev_io_pool_size": 65535, 00:18:07.678 "bdev_io_cache_size": 256, 00:18:07.678 "bdev_auto_examine": true, 00:18:07.678 "iobuf_small_cache_size": 128, 00:18:07.678 "iobuf_large_cache_size": 16 00:18:07.678 } 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "method": "bdev_raid_set_options", 00:18:07.678 "params": { 00:18:07.678 "process_window_size_kb": 1024, 00:18:07.678 "process_max_bandwidth_mb_sec": 0 00:18:07.678 } 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "method": "bdev_iscsi_set_options", 00:18:07.678 "params": { 00:18:07.678 "timeout_sec": 30 00:18:07.678 } 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "method": "bdev_nvme_set_options", 00:18:07.678 "params": { 00:18:07.678 "action_on_timeout": "none", 00:18:07.678 "timeout_us": 0, 00:18:07.678 "timeout_admin_us": 0, 00:18:07.678 "keep_alive_timeout_ms": 10000, 00:18:07.678 "arbitration_burst": 0, 00:18:07.678 "low_priority_weight": 0, 00:18:07.678 "medium_priority_weight": 0, 00:18:07.678 "high_priority_weight": 0, 00:18:07.678 "nvme_adminq_poll_period_us": 10000, 00:18:07.678 "nvme_ioq_poll_period_us": 0, 00:18:07.678 "io_queue_requests": 0, 00:18:07.678 "delay_cmd_submit": true, 00:18:07.678 "transport_retry_count": 4, 
00:18:07.678 "bdev_retry_count": 3, 00:18:07.678 "transport_ack_timeout": 0, 00:18:07.678 "ctrlr_loss_timeout_sec": 0, 00:18:07.678 "reconnect_delay_sec": 0, 00:18:07.678 "fast_io_fail_timeout_sec": 0, 00:18:07.678 "disable_auto_failback": false, 00:18:07.678 "generate_uuids": false, 00:18:07.678 "transport_tos": 0, 00:18:07.678 "nvme_error_stat": false, 00:18:07.678 "rdma_srq_size": 0, 00:18:07.678 "io_path_stat": false, 00:18:07.678 "allow_accel_sequence": false, 00:18:07.678 "rdma_max_cq_size": 0, 00:18:07.678 "rdma_cm_event_timeout_ms": 0, 00:18:07.678 "dhchap_digests": [ 00:18:07.678 "sha256", 00:18:07.678 "sha384", 00:18:07.678 "sha512" 00:18:07.678 ], 00:18:07.678 "dhchap_dhgroups": [ 00:18:07.678 "null", 00:18:07.678 "ffdhe2048", 00:18:07.678 "ffdhe3072", 00:18:07.678 "ffdhe4096", 00:18:07.678 "ffdhe6144", 00:18:07.678 "ffdhe8192" 00:18:07.678 ] 00:18:07.678 } 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "method": "bdev_nvme_set_hotplug", 00:18:07.678 "params": { 00:18:07.678 "period_us": 100000, 00:18:07.678 "enable": false 00:18:07.678 } 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "method": "bdev_malloc_create", 00:18:07.678 "params": { 00:18:07.678 "name": "malloc0", 00:18:07.678 "num_blocks": 8192, 00:18:07.678 "block_size": 4096, 00:18:07.678 "physical_block_size": 4096, 00:18:07.678 "uuid": "d7fd39d0-9e2a-4fde-be2e-48c4d6eaa8bb", 00:18:07.678 "optimal_io_boundary": 0, 00:18:07.678 "md_size": 0, 00:18:07.678 "dif_type": 0, 00:18:07.678 "dif_is_head_of_md": false, 00:18:07.678 "dif_pi_format": 0 00:18:07.678 } 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "method": "bdev_wait_for_examine" 00:18:07.678 } 00:18:07.678 ] 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "subsystem": "nbd", 00:18:07.678 "config": [] 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "subsystem": "scheduler", 00:18:07.678 "config": [ 00:18:07.678 { 00:18:07.678 "method": "framework_set_scheduler", 00:18:07.678 "params": { 00:18:07.678 "name": "static" 00:18:07.678 } 00:18:07.678 } 00:18:07.678 ] 00:18:07.678 }, 00:18:07.678 { 00:18:07.678 "subsystem": "nvmf", 00:18:07.678 "config": [ 00:18:07.678 { 00:18:07.679 "method": "nvmf_set_config", 00:18:07.679 "params": { 00:18:07.679 "discovery_filter": "match_any", 00:18:07.679 "admin_cmd_passthru": { 00:18:07.679 "identify_ctrlr": false 00:18:07.679 }, 00:18:07.679 "dhchap_digests": [ 00:18:07.679 "sha256", 00:18:07.679 "sha384", 00:18:07.679 "sha512" 00:18:07.679 ], 00:18:07.679 "dhchap_dhgroups": [ 00:18:07.679 "null", 00:18:07.679 "ffdhe2048", 00:18:07.679 "ffdhe3072", 00:18:07.679 "ffdhe4096", 00:18:07.679 "ffdhe6144", 00:18:07.679 "ffdhe8192" 00:18:07.679 ] 00:18:07.679 } 00:18:07.679 }, 00:18:07.679 { 00:18:07.679 "method": "nvmf_set_max_subsystems", 00:18:07.679 "params": { 00:18:07.679 "max_subsystems": 1024 00:18:07.679 } 00:18:07.679 }, 00:18:07.679 { 00:18:07.679 "method": "nvmf_set_crdt", 00:18:07.679 "params": { 00:18:07.679 "crdt1": 0, 00:18:07.679 "crdt2": 0, 00:18:07.679 "crdt3": 0 00:18:07.679 } 00:18:07.679 }, 00:18:07.679 { 00:18:07.679 "method": "nvmf_create_transport", 00:18:07.679 "params": { 00:18:07.679 "trtype": "TCP", 00:18:07.679 "max_queue_depth": 128, 00:18:07.679 "max_io_qpairs_per_ctrlr": 127, 00:18:07.679 "in_capsule_data_size": 4096, 00:18:07.679 "max_io_size": 131072, 00:18:07.679 "io_unit_size": 131072, 00:18:07.679 "max_aq_depth": 128, 00:18:07.679 "num_shared_buffers": 511, 00:18:07.679 "buf_cache_size": 4294967295, 00:18:07.679 "dif_insert_or_strip": false, 00:18:07.679 "zcopy": false, 00:18:07.679 "c2h_success": false, 00:18:07.679 
"sock_priority": 0, 00:18:07.679 "abort_timeout_sec": 1, 00:18:07.679 "ack_timeout": 0, 00:18:07.679 "data_wr_pool_size": 0 00:18:07.679 } 00:18:07.679 }, 00:18:07.679 { 00:18:07.679 "method": "nvmf_create_subsystem", 00:18:07.679 "params": { 00:18:07.679 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.679 "allow_any_host": false, 00:18:07.679 "serial_number": "SPDK00000000000001", 00:18:07.679 "model_number": "SPDK bdev Controller", 00:18:07.679 "max_namespaces": 10, 00:18:07.679 "min_cntlid": 1, 00:18:07.679 "max_cntlid": 65519, 00:18:07.679 "ana_reporting": false 00:18:07.679 } 00:18:07.679 }, 00:18:07.679 { 00:18:07.679 "method": "nvmf_subsystem_add_host", 00:18:07.679 "params": { 00:18:07.679 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.679 "host": "nqn.2016-06.io.spdk:host1", 00:18:07.679 "psk": "key0" 00:18:07.679 } 00:18:07.679 }, 00:18:07.679 { 00:18:07.679 "method": "nvmf_subsystem_add_ns", 00:18:07.679 "params": { 00:18:07.679 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.679 "namespace": { 00:18:07.679 "nsid": 1, 00:18:07.679 "bdev_name": "malloc0", 00:18:07.679 "nguid": "D7FD39D09E2A4FDEBE2E48C4D6EAA8BB", 00:18:07.679 "uuid": "d7fd39d0-9e2a-4fde-be2e-48c4d6eaa8bb", 00:18:07.679 "no_auto_visible": false 00:18:07.679 } 00:18:07.679 } 00:18:07.679 }, 00:18:07.679 { 00:18:07.679 "method": "nvmf_subsystem_add_listener", 00:18:07.679 "params": { 00:18:07.679 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.679 "listen_address": { 00:18:07.679 "trtype": "TCP", 00:18:07.679 "adrfam": "IPv4", 00:18:07.679 "traddr": "10.0.0.3", 00:18:07.679 "trsvcid": "4420" 00:18:07.679 }, 00:18:07.679 "secure_channel": true 00:18:07.679 } 00:18:07.679 } 00:18:07.679 ] 00:18:07.679 } 00:18:07.679 ] 00:18:07.679 }' 00:18:07.679 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:08.246 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:08.246 "subsystems": [ 00:18:08.246 { 00:18:08.246 "subsystem": "keyring", 00:18:08.246 "config": [ 00:18:08.246 { 00:18:08.246 "method": "keyring_file_add_key", 00:18:08.246 "params": { 00:18:08.246 "name": "key0", 00:18:08.246 "path": "/tmp/tmp.txqqXu9dzf" 00:18:08.246 } 00:18:08.246 } 00:18:08.246 ] 00:18:08.246 }, 00:18:08.246 { 00:18:08.246 "subsystem": "iobuf", 00:18:08.246 "config": [ 00:18:08.246 { 00:18:08.246 "method": "iobuf_set_options", 00:18:08.246 "params": { 00:18:08.246 "small_pool_count": 8192, 00:18:08.246 "large_pool_count": 1024, 00:18:08.246 "small_bufsize": 8192, 00:18:08.246 "large_bufsize": 135168 00:18:08.246 } 00:18:08.246 } 00:18:08.246 ] 00:18:08.246 }, 00:18:08.246 { 00:18:08.246 "subsystem": "sock", 00:18:08.246 "config": [ 00:18:08.246 { 00:18:08.246 "method": "sock_set_default_impl", 00:18:08.246 "params": { 00:18:08.246 "impl_name": "uring" 00:18:08.246 } 00:18:08.246 }, 00:18:08.246 { 00:18:08.246 "method": "sock_impl_set_options", 00:18:08.246 "params": { 00:18:08.246 "impl_name": "ssl", 00:18:08.246 "recv_buf_size": 4096, 00:18:08.246 "send_buf_size": 4096, 00:18:08.246 "enable_recv_pipe": true, 00:18:08.246 "enable_quickack": false, 00:18:08.246 "enable_placement_id": 0, 00:18:08.246 "enable_zerocopy_send_server": true, 00:18:08.246 "enable_zerocopy_send_client": false, 00:18:08.246 "zerocopy_threshold": 0, 00:18:08.246 "tls_version": 0, 00:18:08.246 "enable_ktls": false 00:18:08.246 } 00:18:08.246 }, 00:18:08.246 { 00:18:08.246 "method": "sock_impl_set_options", 00:18:08.246 "params": { 
00:18:08.246 "impl_name": "posix", 00:18:08.246 "recv_buf_size": 2097152, 00:18:08.246 "send_buf_size": 2097152, 00:18:08.246 "enable_recv_pipe": true, 00:18:08.246 "enable_quickack": false, 00:18:08.246 "enable_placement_id": 0, 00:18:08.246 "enable_zerocopy_send_server": true, 00:18:08.246 "enable_zerocopy_send_client": false, 00:18:08.246 "zerocopy_threshold": 0, 00:18:08.246 "tls_version": 0, 00:18:08.246 "enable_ktls": false 00:18:08.246 } 00:18:08.246 }, 00:18:08.246 { 00:18:08.246 "method": "sock_impl_set_options", 00:18:08.246 "params": { 00:18:08.246 "impl_name": "uring", 00:18:08.246 "recv_buf_size": 2097152, 00:18:08.246 "send_buf_size": 2097152, 00:18:08.246 "enable_recv_pipe": true, 00:18:08.246 "enable_quickack": false, 00:18:08.246 "enable_placement_id": 0, 00:18:08.246 "enable_zerocopy_send_server": false, 00:18:08.246 "enable_zerocopy_send_client": false, 00:18:08.246 "zerocopy_threshold": 0, 00:18:08.246 "tls_version": 0, 00:18:08.246 "enable_ktls": false 00:18:08.246 } 00:18:08.246 } 00:18:08.246 ] 00:18:08.246 }, 00:18:08.246 { 00:18:08.246 "subsystem": "vmd", 00:18:08.246 "config": [] 00:18:08.246 }, 00:18:08.246 { 00:18:08.246 "subsystem": "accel", 00:18:08.246 "config": [ 00:18:08.246 { 00:18:08.246 "method": "accel_set_options", 00:18:08.246 "params": { 00:18:08.246 "small_cache_size": 128, 00:18:08.246 "large_cache_size": 16, 00:18:08.246 "task_count": 2048, 00:18:08.246 "sequence_count": 2048, 00:18:08.246 "buf_count": 2048 00:18:08.246 } 00:18:08.246 } 00:18:08.246 ] 00:18:08.246 }, 00:18:08.246 { 00:18:08.246 "subsystem": "bdev", 00:18:08.246 "config": [ 00:18:08.246 { 00:18:08.246 "method": "bdev_set_options", 00:18:08.246 "params": { 00:18:08.246 "bdev_io_pool_size": 65535, 00:18:08.246 "bdev_io_cache_size": 256, 00:18:08.246 "bdev_auto_examine": true, 00:18:08.246 "iobuf_small_cache_size": 128, 00:18:08.246 "iobuf_large_cache_size": 16 00:18:08.246 } 00:18:08.246 }, 00:18:08.246 { 00:18:08.246 "method": "bdev_raid_set_options", 00:18:08.246 "params": { 00:18:08.246 "process_window_size_kb": 1024, 00:18:08.246 "process_max_bandwidth_mb_sec": 0 00:18:08.246 } 00:18:08.246 }, 00:18:08.246 { 00:18:08.246 "method": "bdev_iscsi_set_options", 00:18:08.246 "params": { 00:18:08.246 "timeout_sec": 30 00:18:08.246 } 00:18:08.246 }, 00:18:08.246 { 00:18:08.246 "method": "bdev_nvme_set_options", 00:18:08.246 "params": { 00:18:08.246 "action_on_timeout": "none", 00:18:08.246 "timeout_us": 0, 00:18:08.246 "timeout_admin_us": 0, 00:18:08.246 "keep_alive_timeout_ms": 10000, 00:18:08.246 "arbitration_burst": 0, 00:18:08.246 "low_priority_weight": 0, 00:18:08.246 "medium_priority_weight": 0, 00:18:08.246 "high_priority_weight": 0, 00:18:08.246 "nvme_adminq_poll_period_us": 10000, 00:18:08.246 "nvme_ioq_poll_period_us": 0, 00:18:08.246 "io_queue_requests": 512, 00:18:08.246 "delay_cmd_submit": true, 00:18:08.246 "transport_retry_count": 4, 00:18:08.246 "bdev_retry_count": 3, 00:18:08.247 "transport_ack_timeout": 0, 00:18:08.247 "ctrlr_loss_timeout_sec": 0, 00:18:08.247 "reconnect_delay_sec": 0, 00:18:08.247 "fast_io_fail_timeout_sec": 0, 00:18:08.247 "disable_auto_failback": false, 00:18:08.247 "generate_uuids": false, 00:18:08.247 "transport_tos": 0, 00:18:08.247 "nvme_error_stat": false, 00:18:08.247 "rdma_srq_size": 0, 00:18:08.247 "io_path_stat": false, 00:18:08.247 "allow_accel_sequence": false, 00:18:08.247 "rdma_max_cq_size": 0, 00:18:08.247 "rdma_cm_event_timeout_ms": 0, 00:18:08.247 "dhchap_digests": [ 00:18:08.247 "sha256", 00:18:08.247 "sha384", 00:18:08.247 "sha512" 
00:18:08.247 ], 00:18:08.247 "dhchap_dhgroups": [ 00:18:08.247 "null", 00:18:08.247 "ffdhe2048", 00:18:08.247 "ffdhe3072", 00:18:08.247 "ffdhe4096", 00:18:08.247 "ffdhe6144", 00:18:08.247 "ffdhe8192" 00:18:08.247 ] 00:18:08.247 } 00:18:08.247 }, 00:18:08.247 { 00:18:08.247 "method": "bdev_nvme_attach_controller", 00:18:08.247 "params": { 00:18:08.247 "name": "TLSTEST", 00:18:08.247 "trtype": "TCP", 00:18:08.247 "adrfam": "IPv4", 00:18:08.247 "traddr": "10.0.0.3", 00:18:08.247 "trsvcid": "4420", 00:18:08.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.247 "prchk_reftag": false, 00:18:08.247 "prchk_guard": false, 00:18:08.247 "ctrlr_loss_timeout_sec": 0, 00:18:08.247 "reconnect_delay_sec": 0, 00:18:08.247 "fast_io_fail_timeout_sec": 0, 00:18:08.247 "psk": "key0", 00:18:08.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.247 "hdgst": false, 00:18:08.247 "ddgst": false 00:18:08.247 } 00:18:08.247 }, 00:18:08.247 { 00:18:08.247 "method": "bdev_nvme_set_hotplug", 00:18:08.247 "params": { 00:18:08.247 "period_us": 100000, 00:18:08.247 "enable": false 00:18:08.247 } 00:18:08.247 }, 00:18:08.247 { 00:18:08.247 "method": "bdev_wait_for_examine" 00:18:08.247 } 00:18:08.247 ] 00:18:08.247 }, 00:18:08.247 { 00:18:08.247 "subsystem": "nbd", 00:18:08.247 "config": [] 00:18:08.247 } 00:18:08.247 ] 00:18:08.247 }' 00:18:08.247 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 75081 00:18:08.247 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75081 ']' 00:18:08.247 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75081 00:18:08.247 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:08.247 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:08.247 08:55:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75081 00:18:08.247 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:08.247 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:08.247 killing process with pid 75081 00:18:08.247 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75081' 00:18:08.247 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.247 00:18:08.247 Latency(us) 00:18:08.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.247 =================================================================================================================== 00:18:08.247 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:08.247 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75081 00:18:08.247 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75081 00:18:09.183 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 75025 00:18:09.183 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75025 ']' 00:18:09.183 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75025 00:18:09.183 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:09.183 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:18:09.183 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75025 00:18:09.183 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:09.183 killing process with pid 75025 00:18:09.183 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:09.183 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75025' 00:18:09.183 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75025 00:18:09.183 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75025 00:18:10.120 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:10.120 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:10.120 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:10.120 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:10.120 "subsystems": [ 00:18:10.120 { 00:18:10.120 "subsystem": "keyring", 00:18:10.120 "config": [ 00:18:10.120 { 00:18:10.120 "method": "keyring_file_add_key", 00:18:10.120 "params": { 00:18:10.120 "name": "key0", 00:18:10.120 "path": "/tmp/tmp.txqqXu9dzf" 00:18:10.120 } 00:18:10.120 } 00:18:10.120 ] 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "subsystem": "iobuf", 00:18:10.120 "config": [ 00:18:10.120 { 00:18:10.120 "method": "iobuf_set_options", 00:18:10.120 "params": { 00:18:10.120 "small_pool_count": 8192, 00:18:10.120 "large_pool_count": 1024, 00:18:10.120 "small_bufsize": 8192, 00:18:10.120 "large_bufsize": 135168 00:18:10.120 } 00:18:10.120 } 00:18:10.120 ] 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "subsystem": "sock", 00:18:10.120 "config": [ 00:18:10.120 { 00:18:10.120 "method": "sock_set_default_impl", 00:18:10.120 "params": { 00:18:10.120 "impl_name": "uring" 00:18:10.120 } 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "method": "sock_impl_set_options", 00:18:10.120 "params": { 00:18:10.120 "impl_name": "ssl", 00:18:10.120 "recv_buf_size": 4096, 00:18:10.120 "send_buf_size": 4096, 00:18:10.120 "enable_recv_pipe": true, 00:18:10.120 "enable_quickack": false, 00:18:10.120 "enable_placement_id": 0, 00:18:10.120 "enable_zerocopy_send_server": true, 00:18:10.120 "enable_zerocopy_send_client": false, 00:18:10.120 "zerocopy_threshold": 0, 00:18:10.120 "tls_version": 0, 00:18:10.120 "enable_ktls": false 00:18:10.120 } 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "method": "sock_impl_set_options", 00:18:10.120 "params": { 00:18:10.120 "impl_name": "posix", 00:18:10.120 "recv_buf_size": 2097152, 00:18:10.120 "send_buf_size": 2097152, 00:18:10.120 "enable_recv_pipe": true, 00:18:10.120 "enable_quickack": false, 00:18:10.120 "enable_placement_id": 0, 00:18:10.120 "enable_zerocopy_send_server": true, 00:18:10.120 "enable_zerocopy_send_client": false, 00:18:10.120 "zerocopy_threshold": 0, 00:18:10.120 "tls_version": 0, 00:18:10.120 "enable_ktls": false 00:18:10.120 } 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "method": "sock_impl_set_options", 00:18:10.120 "params": { 00:18:10.120 "impl_name": "uring", 00:18:10.120 "recv_buf_size": 2097152, 00:18:10.120 "send_buf_size": 2097152, 00:18:10.120 "enable_recv_pipe": true, 00:18:10.120 "enable_quickack": false, 00:18:10.120 
"enable_placement_id": 0, 00:18:10.120 "enable_zerocopy_send_server": false, 00:18:10.120 "enable_zerocopy_send_client": false, 00:18:10.120 "zerocopy_threshold": 0, 00:18:10.120 "tls_version": 0, 00:18:10.120 "enable_ktls": false 00:18:10.120 } 00:18:10.120 } 00:18:10.120 ] 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "subsystem": "vmd", 00:18:10.120 "config": [] 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "subsystem": "accel", 00:18:10.120 "config": [ 00:18:10.120 { 00:18:10.120 "method": "accel_set_options", 00:18:10.120 "params": { 00:18:10.120 "small_cache_size": 128, 00:18:10.120 "large_cache_size": 16, 00:18:10.120 "task_count": 2048, 00:18:10.120 "sequence_count": 2048, 00:18:10.120 "buf_count": 2048 00:18:10.120 } 00:18:10.120 } 00:18:10.120 ] 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "subsystem": "bdev", 00:18:10.120 "config": [ 00:18:10.120 { 00:18:10.120 "method": "bdev_set_options", 00:18:10.120 "params": { 00:18:10.120 "bdev_io_pool_size": 65535, 00:18:10.120 "bdev_io_cache_size": 256, 00:18:10.120 "bdev_auto_examine": true, 00:18:10.120 "iobuf_small_cache_size": 128, 00:18:10.120 "iobuf_large_cache_size": 16 00:18:10.120 } 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "method": "bdev_raid_set_options", 00:18:10.120 "params": { 00:18:10.120 "process_window_size_kb": 1024, 00:18:10.120 "process_max_bandwidth_mb_sec": 0 00:18:10.120 } 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "method": "bdev_iscsi_set_options", 00:18:10.120 "params": { 00:18:10.120 "timeout_sec": 30 00:18:10.120 } 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "method": "bdev_nvme_set_options", 00:18:10.120 "params": { 00:18:10.120 "action_on_timeout": "none", 00:18:10.120 "timeout_us": 0, 00:18:10.120 "timeout_admin_us": 0, 00:18:10.120 "keep_alive_timeout_ms": 10000, 00:18:10.120 "arbitration_burst": 0, 00:18:10.120 "low_priority_weight": 0, 00:18:10.120 "medium_priority_weight": 0, 00:18:10.120 "high_priority_weight": 0, 00:18:10.120 "nvme_adminq_poll_period_us": 10000, 00:18:10.120 "nvme_ioq_poll_period_us": 0, 00:18:10.120 "io_queue_requests": 0, 00:18:10.120 "delay_cmd_submit": true, 00:18:10.120 "transport_retry_count": 4, 00:18:10.120 "bdev_retry_count": 3, 00:18:10.120 "transport_ack_timeout": 0, 00:18:10.120 "ctrlr_loss_timeout_sec": 0, 00:18:10.120 "reconnect_delay_sec": 0, 00:18:10.120 "fast_io_fail_timeout_sec": 0, 00:18:10.120 "disable_auto_failback": false, 00:18:10.120 "generate_uuids": false, 00:18:10.120 "transport_tos": 0, 00:18:10.120 "nvme_error_stat": false, 00:18:10.120 "rdma_srq_size": 0, 00:18:10.120 "io_path_stat": false, 00:18:10.120 "allow_accel_sequence": false, 00:18:10.120 "rdma_max_cq_size": 0, 00:18:10.120 "rdma_cm_event_timeout_ms": 0, 00:18:10.120 "dhchap_digests": [ 00:18:10.120 "sha256", 00:18:10.120 "sha384", 00:18:10.120 "sha512" 00:18:10.120 ], 00:18:10.120 "dhchap_dhgroups": [ 00:18:10.120 "null", 00:18:10.120 "ffdhe2048", 00:18:10.120 "ffdhe3072", 00:18:10.120 "ffdhe4096", 00:18:10.120 "ffdhe6144", 00:18:10.120 "ffdhe8192" 00:18:10.120 ] 00:18:10.120 } 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "method": "bdev_nvme_set_hotplug", 00:18:10.120 "params": { 00:18:10.120 "period_us": 100000, 00:18:10.120 "enable": false 00:18:10.120 } 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "method": "bdev_malloc_create", 00:18:10.120 "params": { 00:18:10.120 "name": "malloc0", 00:18:10.120 "num_blocks": 8192, 00:18:10.120 "block_size": 4096, 00:18:10.120 "physical_block_size": 4096, 00:18:10.120 "uuid": "d7fd39d0-9e2a-4fde-be2e-48c4d6eaa8bb", 00:18:10.120 "optimal_io_boundary": 0, 
00:18:10.120 "md_size": 0, 00:18:10.120 "dif_type": 0, 00:18:10.120 "dif_is_head_of_md": false, 00:18:10.120 "dif_pi_format": 0 00:18:10.120 } 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "method": "bdev_wait_for_examine" 00:18:10.120 } 00:18:10.120 ] 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "subsystem": "nbd", 00:18:10.120 "config": [] 00:18:10.120 }, 00:18:10.120 { 00:18:10.120 "subsystem": "scheduler", 00:18:10.120 "config": [ 00:18:10.120 { 00:18:10.120 "method": "framework_set_scheduler", 00:18:10.120 "params": { 00:18:10.120 "name": "static" 00:18:10.120 } 00:18:10.120 } 00:18:10.121 ] 00:18:10.121 }, 00:18:10.121 { 00:18:10.121 "subsystem": "nvmf", 00:18:10.121 "config": [ 00:18:10.121 { 00:18:10.121 "method": "nvmf_set_config", 00:18:10.121 "params": { 00:18:10.121 "discovery_filter": "match_any", 00:18:10.121 "admin_cmd_passthru": { 00:18:10.121 "identify_ctrlr": false 00:18:10.121 }, 00:18:10.121 "dhchap_digests": [ 00:18:10.121 "sha256", 00:18:10.121 "sha384", 00:18:10.121 "sha512" 00:18:10.121 ], 00:18:10.121 "dhchap_dhgroups": [ 00:18:10.121 "null", 00:18:10.121 "ffdhe2048", 00:18:10.121 "ffdhe3072", 00:18:10.121 "ffdhe4096", 00:18:10.121 "ffdhe6144", 00:18:10.121 "ffdhe8192" 00:18:10.121 ] 00:18:10.121 } 00:18:10.121 }, 00:18:10.121 { 00:18:10.121 "method": "nvmf_set_max_subsystems", 00:18:10.121 "params": { 00:18:10.121 "max_subsystems": 1024 00:18:10.121 } 00:18:10.121 }, 00:18:10.121 { 00:18:10.121 "method": "nvmf_set_crdt", 00:18:10.121 "params": { 00:18:10.121 "crdt1": 0, 00:18:10.121 "crdt2": 0, 00:18:10.121 "crdt3": 0 00:18:10.121 } 00:18:10.121 }, 00:18:10.121 { 00:18:10.121 "method": "nvmf_create_transport", 00:18:10.121 "params": { 00:18:10.121 "trtype": "TCP", 00:18:10.121 "max_queue_depth": 128, 00:18:10.121 "max_io_qpairs_per_ctrlr": 127, 00:18:10.121 "in_capsule_data_size": 4096, 00:18:10.121 "max_io_size": 131072, 00:18:10.121 "io_unit_size": 131072, 00:18:10.121 "max_aq_depth": 128, 00:18:10.121 "num_shared_buffers": 511, 00:18:10.121 "buf_cache_size": 4294967295, 00:18:10.121 "dif_insert_or_strip": false, 00:18:10.121 "zcopy": false, 00:18:10.121 "c2h_success": false, 00:18:10.121 "sock_priority": 0, 00:18:10.121 "abort_timeout_sec": 1, 00:18:10.121 "ack_timeout": 0, 00:18:10.121 "data_wr_pool_size": 0 00:18:10.121 } 00:18:10.121 }, 00:18:10.121 { 00:18:10.121 "method": "nvmf_create_subsystem", 00:18:10.121 "params": { 00:18:10.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.121 "allow_any_host": false, 00:18:10.121 "serial_number": "SPDK00000000000001", 00:18:10.121 "model_number": "SPDK bdev Controller", 00:18:10.121 "max_namespaces": 10, 00:18:10.121 "min_cntlid": 1, 00:18:10.121 "max_cntlid": 65519, 00:18:10.121 "ana_reporting": false 00:18:10.121 } 00:18:10.121 }, 00:18:10.121 { 00:18:10.121 "method": "nvmf_subsystem_add_host", 00:18:10.121 "params": { 00:18:10.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.121 "host": "nqn.2016-06.io.spdk:host1", 00:18:10.121 "psk": "key0" 00:18:10.121 } 00:18:10.121 }, 00:18:10.121 { 00:18:10.121 "method": "nvmf_subsystem_add_ns", 00:18:10.121 "params": { 00:18:10.121 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.121 "namespace": { 00:18:10.121 "nsid": 1, 00:18:10.121 "bdev_name": "malloc0", 00:18:10.121 "nguid": "D7FD39D09E2A4FDEBE2E48C4D6EAA8BB", 00:18:10.121 "uuid": "d7fd39d0-9e2a-4fde-be2e-48c4d6eaa8bb", 00:18:10.121 "no_auto_visible": false 00:18:10.121 } 00:18:10.121 } 00:18:10.121 }, 00:18:10.121 { 00:18:10.121 "method": "nvmf_subsystem_add_listener", 00:18:10.121 "params": { 00:18:10.121 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:18:10.121 "listen_address": { 00:18:10.121 "trtype": "TCP", 00:18:10.121 "adrfam": "IPv4", 00:18:10.121 "traddr": "10.0.0.3", 00:18:10.121 "trsvcid": "4420" 00:18:10.121 }, 00:18:10.121 "secure_channel": true 00:18:10.121 } 00:18:10.121 } 00:18:10.121 ] 00:18:10.121 } 00:18:10.121 ] 00:18:10.121 }' 00:18:10.121 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.121 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=75150 00:18:10.121 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:10.121 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 75150 00:18:10.121 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75150 ']' 00:18:10.121 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.121 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:10.121 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.121 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:10.121 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.379 [2024-09-28 08:55:48.208844] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:10.379 [2024-09-28 08:55:48.209012] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.637 [2024-09-28 08:55:48.377927] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.637 [2024-09-28 08:55:48.528004] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.637 [2024-09-28 08:55:48.528215] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.637 [2024-09-28 08:55:48.528295] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.637 [2024-09-28 08:55:48.528379] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.638 [2024-09-28 08:55:48.528445] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:10.638 [2024-09-28 08:55:48.528623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.896 [2024-09-28 08:55:48.801939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:11.156 [2024-09-28 08:55:48.963657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.156 [2024-09-28 08:55:48.995626] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:11.156 [2024-09-28 08:55:48.996057] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:11.156 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:11.156 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:11.156 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:11.156 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:11.156 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.415 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.415 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=75182 00:18:11.415 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 75182 /var/tmp/bdevperf.sock 00:18:11.415 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75182 ']' 00:18:11.415 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.415 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:11.415 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
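# Note: this pass re-creates the same target from saved JSON configuration instead of
# individual RPCs. The '{ "subsystems": [...] }' document echoed above matches the
# save_config dump captured earlier in this log and is handed to nvmf_tgt on file
# descriptor 62; the bdevperf instance launched next receives its own configuration
# (echoed below) on file descriptor 63. The two launch commands, as they appear in this trace:
#
#   ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62
#   /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63
#
# How the JSON reaches those descriptors (here-string, process substitution, or similar shell
# plumbing) is not visible in the trace itself; only the -c /dev/fd/NN arguments and the
# echoed JSON are.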
00:18:11.415 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:11.415 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:11.415 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.415 08:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:11.415 "subsystems": [ 00:18:11.415 { 00:18:11.415 "subsystem": "keyring", 00:18:11.415 "config": [ 00:18:11.415 { 00:18:11.415 "method": "keyring_file_add_key", 00:18:11.415 "params": { 00:18:11.415 "name": "key0", 00:18:11.415 "path": "/tmp/tmp.txqqXu9dzf" 00:18:11.415 } 00:18:11.415 } 00:18:11.415 ] 00:18:11.415 }, 00:18:11.415 { 00:18:11.415 "subsystem": "iobuf", 00:18:11.415 "config": [ 00:18:11.415 { 00:18:11.415 "method": "iobuf_set_options", 00:18:11.415 "params": { 00:18:11.415 "small_pool_count": 8192, 00:18:11.415 "large_pool_count": 1024, 00:18:11.415 "small_bufsize": 8192, 00:18:11.415 "large_bufsize": 135168 00:18:11.415 } 00:18:11.415 } 00:18:11.415 ] 00:18:11.415 }, 00:18:11.415 { 00:18:11.415 "subsystem": "sock", 00:18:11.415 "config": [ 00:18:11.415 { 00:18:11.415 "method": "sock_set_default_impl", 00:18:11.415 "params": { 00:18:11.415 "impl_name": "uring" 00:18:11.415 } 00:18:11.415 }, 00:18:11.415 { 00:18:11.415 "method": "sock_impl_set_options", 00:18:11.415 "params": { 00:18:11.415 "impl_name": "ssl", 00:18:11.415 "recv_buf_size": 4096, 00:18:11.415 "send_buf_size": 4096, 00:18:11.415 "enable_recv_pipe": true, 00:18:11.415 "enable_quickack": false, 00:18:11.415 "enable_placement_id": 0, 00:18:11.415 "enable_zerocopy_send_server": true, 00:18:11.416 "enable_zerocopy_send_client": false, 00:18:11.416 "zerocopy_threshold": 0, 00:18:11.416 "tls_version": 0, 00:18:11.416 "enable_ktls": false 00:18:11.416 } 00:18:11.416 }, 00:18:11.416 { 00:18:11.416 "method": "sock_impl_set_options", 00:18:11.416 "params": { 00:18:11.416 "impl_name": "posix", 00:18:11.416 "recv_buf_size": 2097152, 00:18:11.416 "send_buf_size": 2097152, 00:18:11.416 "enable_recv_pipe": true, 00:18:11.416 "enable_quickack": false, 00:18:11.416 "enable_placement_id": 0, 00:18:11.416 "enable_zerocopy_send_server": true, 00:18:11.416 "enable_zerocopy_send_client": false, 00:18:11.416 "zerocopy_threshold": 0, 00:18:11.416 "tls_version": 0, 00:18:11.416 "enable_ktls": false 00:18:11.416 } 00:18:11.416 }, 00:18:11.416 { 00:18:11.416 "method": "sock_impl_set_options", 00:18:11.416 "params": { 00:18:11.416 "impl_name": "uring", 00:18:11.416 "recv_buf_size": 2097152, 00:18:11.416 "send_buf_size": 2097152, 00:18:11.416 "enable_recv_pipe": true, 00:18:11.416 "enable_quickack": false, 00:18:11.416 "enable_placement_id": 0, 00:18:11.416 "enable_zerocopy_send_server": false, 00:18:11.416 "enable_zerocopy_send_client": false, 00:18:11.416 "zerocopy_threshold": 0, 00:18:11.416 "tls_version": 0, 00:18:11.416 "enable_ktls": false 00:18:11.416 } 00:18:11.416 } 00:18:11.416 ] 00:18:11.416 }, 00:18:11.416 { 00:18:11.416 "subsystem": "vmd", 00:18:11.416 "config": [] 00:18:11.416 }, 00:18:11.416 { 00:18:11.416 "subsystem": "accel", 00:18:11.416 "config": [ 00:18:11.416 { 00:18:11.416 "method": "accel_set_options", 00:18:11.416 "params": { 00:18:11.416 "small_cache_size": 128, 00:18:11.416 "large_cache_size": 16, 00:18:11.416 "task_count": 2048, 00:18:11.416 "sequence_count": 2048, 00:18:11.416 "buf_count": 2048 
00:18:11.416 } 00:18:11.416 } 00:18:11.416 ] 00:18:11.416 }, 00:18:11.416 { 00:18:11.416 "subsystem": "bdev", 00:18:11.416 "config": [ 00:18:11.416 { 00:18:11.416 "method": "bdev_set_options", 00:18:11.416 "params": { 00:18:11.416 "bdev_io_pool_size": 65535, 00:18:11.416 "bdev_io_cache_size": 256, 00:18:11.416 "bdev_auto_examine": true, 00:18:11.416 "iobuf_small_cache_size": 128, 00:18:11.416 "iobuf_large_cache_size": 16 00:18:11.416 } 00:18:11.416 }, 00:18:11.416 { 00:18:11.416 "method": "bdev_raid_set_options", 00:18:11.416 "params": { 00:18:11.416 "process_window_size_kb": 1024, 00:18:11.416 "process_max_bandwidth_mb_sec": 0 00:18:11.416 } 00:18:11.416 }, 00:18:11.416 { 00:18:11.416 "method": "bdev_iscsi_set_options", 00:18:11.416 "params": { 00:18:11.416 "timeout_sec": 30 00:18:11.416 } 00:18:11.416 }, 00:18:11.416 { 00:18:11.416 "method": "bdev_nvme_set_options", 00:18:11.416 "params": { 00:18:11.416 "action_on_timeout": "none", 00:18:11.416 "timeout_us": 0, 00:18:11.416 "timeout_admin_us": 0, 00:18:11.416 "keep_alive_timeout_ms": 10000, 00:18:11.416 "arbitration_burst": 0, 00:18:11.416 "low_priority_weight": 0, 00:18:11.416 "medium_priority_weight": 0, 00:18:11.416 "high_priority_weight": 0, 00:18:11.416 "nvme_adminq_poll_period_us": 10000, 00:18:11.416 "nvme_ioq_poll_period_us": 0, 00:18:11.416 "io_queue_requests": 512, 00:18:11.416 "delay_cmd_submit": true, 00:18:11.416 "transport_retry_count": 4, 00:18:11.416 "bdev_retry_count": 3, 00:18:11.416 "transport_ack_timeout": 0, 00:18:11.416 "ctrlr_loss_timeout_sec": 0, 00:18:11.416 "reconnect_delay_sec": 0, 00:18:11.416 "fast_io_fail_timeout_sec": 0, 00:18:11.416 "disable_auto_failback": false, 00:18:11.416 "generate_uuids": false, 00:18:11.416 "transport_tos": 0, 00:18:11.416 "nvme_error_stat": false, 00:18:11.416 "rdma_srq_size": 0, 00:18:11.416 "io_path_stat": false, 00:18:11.416 "allow_accel_sequence": false, 00:18:11.416 "rdma_max_cq_size": 0, 00:18:11.416 "rdma_cm_event_timeout_ms": 0, 00:18:11.416 "dhchap_digests": [ 00:18:11.416 "sha256", 00:18:11.416 "sha384", 00:18:11.416 "sha512" 00:18:11.416 ], 00:18:11.416 "dhchap_dhgroups": [ 00:18:11.416 "null", 00:18:11.416 "ffdhe2048", 00:18:11.416 "ffdhe3072", 00:18:11.416 "ffdhe4096", 00:18:11.416 "ffdhe6144", 00:18:11.416 "ffdhe8192" 00:18:11.416 ] 00:18:11.416 } 00:18:11.416 }, 00:18:11.416 { 00:18:11.416 "method": "bdev_nvme_attach_controller", 00:18:11.416 "params": { 00:18:11.416 "name": "TLSTEST", 00:18:11.416 "trtype": "TCP", 00:18:11.416 "adrfam": "IPv4", 00:18:11.416 "traddr": "10.0.0.3", 00:18:11.416 "trsvcid": "4420", 00:18:11.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.416 "prchk_reftag": false, 00:18:11.416 "prchk_guard": false, 00:18:11.416 "ctrlr_loss_timeout_sec": 0, 00:18:11.416 "reconnect_delay_sec": 0, 00:18:11.416 "fast_io_fail_timeout_sec": 0, 00:18:11.416 "psk": "key0", 00:18:11.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.416 "hdgst": false, 00:18:11.416 "ddgst": false 00:18:11.416 } 00:18:11.416 }, 00:18:11.416 { 00:18:11.416 "method": "bdev_nvme_set_hotplug", 00:18:11.416 "params": { 00:18:11.416 "period_us": 100000, 00:18:11.416 "enable": false 00:18:11.416 } 00:18:11.416 }, 00:18:11.416 { 00:18:11.416 "method": "bdev_wait_for_examine" 00:18:11.416 } 00:18:11.416 ] 00:18:11.416 }, 00:18:11.416 { 00:18:11.416 "subsystem": "nbd", 00:18:11.416 "config": [] 00:18:11.416 } 00:18:11.416 ] 00:18:11.416 }' 00:18:11.416 [2024-09-28 08:55:49.299700] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:11.416 [2024-09-28 08:55:49.299901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75182 ] 00:18:11.675 [2024-09-28 08:55:49.473030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.933 [2024-09-28 08:55:49.709003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.192 [2024-09-28 08:55:49.945680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:12.192 [2024-09-28 08:55:50.053447] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.450 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:12.450 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:12.450 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:12.450 Running I/O for 10 seconds... 00:18:22.724 3257.00 IOPS, 12.72 MiB/s 3291.00 IOPS, 12.86 MiB/s 3301.33 IOPS, 12.90 MiB/s 3307.25 IOPS, 12.92 MiB/s 3288.00 IOPS, 12.84 MiB/s 3297.83 IOPS, 12.88 MiB/s 3296.14 IOPS, 12.88 MiB/s 3298.88 IOPS, 12.89 MiB/s 3302.00 IOPS, 12.90 MiB/s 3299.80 IOPS, 12.89 MiB/s 00:18:22.724 Latency(us) 00:18:22.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.724 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:22.724 Verification LBA range: start 0x0 length 0x2000 00:18:22.724 TLSTESTn1 : 10.02 3305.34 12.91 0.00 0.00 38657.82 7685.59 40036.54 00:18:22.724 =================================================================================================================== 00:18:22.724 Total : 3305.34 12.91 0.00 0.00 38657.82 7685.59 40036.54 00:18:22.724 { 00:18:22.724 "results": [ 00:18:22.724 { 00:18:22.724 "job": "TLSTESTn1", 00:18:22.724 "core_mask": "0x4", 00:18:22.724 "workload": "verify", 00:18:22.724 "status": "finished", 00:18:22.724 "verify_range": { 00:18:22.724 "start": 0, 00:18:22.724 "length": 8192 00:18:22.724 }, 00:18:22.724 "queue_depth": 128, 00:18:22.724 "io_size": 4096, 00:18:22.724 "runtime": 10.021045, 00:18:22.724 "iops": 3305.3439037545486, 00:18:22.724 "mibps": 12.911499624041205, 00:18:22.724 "io_failed": 0, 00:18:22.724 "io_timeout": 0, 00:18:22.724 "avg_latency_us": 38657.819408321055, 00:18:22.724 "min_latency_us": 7685.585454545455, 00:18:22.724 "max_latency_us": 40036.538181818185 00:18:22.724 } 00:18:22.724 ], 00:18:22.724 "core_count": 1 00:18:22.724 } 00:18:22.724 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:22.724 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 75182 00:18:22.724 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75182 ']' 00:18:22.724 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75182 00:18:22.724 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:22.724 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:22.724 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75182 00:18:22.724 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:22.724 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:22.724 killing process with pid 75182 00:18:22.724 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75182' 00:18:22.724 Received shutdown signal, test time was about 10.000000 seconds 00:18:22.724 00:18:22.724 Latency(us) 00:18:22.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:22.724 =================================================================================================================== 00:18:22.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:22.724 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75182 00:18:22.724 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75182 00:18:23.660 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 75150 00:18:23.660 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75150 ']' 00:18:23.660 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75150 00:18:23.660 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:23.660 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:23.660 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75150 00:18:23.660 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:23.660 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:23.660 killing process with pid 75150 00:18:23.660 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75150' 00:18:23.660 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75150 00:18:23.660 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75150 00:18:24.599 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:24.599 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:24.599 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:24.599 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.599 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=75335 00:18:24.599 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:24.599 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 75335 00:18:24.599 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75335 ']' 00:18:24.599 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.599 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:18:24.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.599 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.599 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:24.599 08:56:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.858 [2024-09-28 08:56:02.648288] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:24.858 [2024-09-28 08:56:02.648452] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.858 [2024-09-28 08:56:02.809422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.118 [2024-09-28 08:56:02.965450] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.118 [2024-09-28 08:56:02.965505] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.118 [2024-09-28 08:56:02.965524] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.118 [2024-09-28 08:56:02.965539] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.118 [2024-09-28 08:56:02.965551] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:25.118 [2024-09-28 08:56:02.965585] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.377 [2024-09-28 08:56:03.127636] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:25.637 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:25.637 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:25.637 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:25.637 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:25.637 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.637 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.637 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.txqqXu9dzf 00:18:25.637 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.txqqXu9dzf 00:18:25.637 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:25.897 [2024-09-28 08:56:03.827985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.897 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:26.156 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -k 00:18:26.415 [2024-09-28 08:56:04.344178] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:26.415 [2024-09-28 08:56:04.344530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:26.415 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:26.674 malloc0 00:18:26.674 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:26.933 08:56:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.txqqXu9dzf 00:18:27.192 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.451 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=75395 00:18:27.451 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.451 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 75395 /var/tmp/bdevperf.sock 00:18:27.451 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75395 ']' 00:18:27.451 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:27.451 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.451 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:27.451 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.451 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:27.451 08:56:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.451 [2024-09-28 08:56:05.424629] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:27.451 [2024-09-28 08:56:05.424813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75395 ] 00:18:27.710 [2024-09-28 08:56:05.576873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.970 [2024-09-28 08:56:05.733742] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.970 [2024-09-28 08:56:05.898729] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:28.537 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:28.537 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:28.537 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.txqqXu9dzf 00:18:28.796 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:29.055 [2024-09-28 08:56:06.795965] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.055 nvme0n1 00:18:29.055 08:56:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:29.055 Running I/O for 1 seconds... 00:18:30.433 3201.00 IOPS, 12.50 MiB/s 00:18:30.433 Latency(us) 00:18:30.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.433 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.433 Verification LBA range: start 0x0 length 0x2000 00:18:30.433 nvme0n1 : 1.02 3258.96 12.73 0.00 0.00 38859.40 7268.54 45994.36 00:18:30.433 =================================================================================================================== 00:18:30.433 Total : 3258.96 12.73 0.00 0.00 38859.40 7268.54 45994.36 00:18:30.433 { 00:18:30.433 "results": [ 00:18:30.433 { 00:18:30.433 "job": "nvme0n1", 00:18:30.433 "core_mask": "0x2", 00:18:30.433 "workload": "verify", 00:18:30.433 "status": "finished", 00:18:30.433 "verify_range": { 00:18:30.433 "start": 0, 00:18:30.433 "length": 8192 00:18:30.433 }, 00:18:30.433 "queue_depth": 128, 00:18:30.433 "io_size": 4096, 00:18:30.433 "runtime": 1.021798, 00:18:30.433 "iops": 3258.961164535456, 00:18:30.433 "mibps": 12.730317048966626, 00:18:30.433 "io_failed": 0, 00:18:30.433 "io_timeout": 0, 00:18:30.433 "avg_latency_us": 38859.403915915915, 00:18:30.433 "min_latency_us": 7268.538181818182, 00:18:30.433 "max_latency_us": 45994.35636363636 00:18:30.433 } 00:18:30.433 ], 00:18:30.433 "core_count": 1 00:18:30.433 } 00:18:30.433 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 75395 00:18:30.433 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75395 ']' 00:18:30.433 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75395 00:18:30.433 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:30.433 08:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:30.433 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75395 00:18:30.433 killing process with pid 75395 00:18:30.433 Received shutdown signal, test time was about 1.000000 seconds 00:18:30.433 00:18:30.433 Latency(us) 00:18:30.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.433 =================================================================================================================== 00:18:30.433 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.433 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:30.433 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:30.433 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75395' 00:18:30.433 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75395 00:18:30.433 08:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75395 00:18:31.397 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 75335 00:18:31.397 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75335 ']' 00:18:31.397 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75335 00:18:31.397 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:31.397 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:31.397 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75335 00:18:31.397 killing process with pid 75335 00:18:31.397 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:31.397 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:31.397 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75335' 00:18:31.397 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75335 00:18:31.397 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75335 00:18:32.333 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:32.333 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:32.333 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:32.333 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
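The shutdowns above all go through the killprocess helper in autotest_common.sh (kill -0 check, ps comm= lookup, kill, wait). A minimal sketch of that pattern, assuming a simplified single-PID helper and omitting the real script's extra error handling, is:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0           # process already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")          # same comm= check seen in the log
        [[ $name == sudo ]] && return 1                  # refuse to kill sudo by mistake
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                  # reap the child so the test can move on
    }

    killprocess 75395   # bdevperf initiator
    killprocess 75335   # first nvmf_tgt instance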
00:18:32.333 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=75467 00:18:32.333 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:32.333 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 75467 00:18:32.333 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75467 ']' 00:18:32.333 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.333 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:32.333 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.333 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:32.333 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.593 [2024-09-28 08:56:10.337679] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:32.593 [2024-09-28 08:56:10.337884] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.593 [2024-09-28 08:56:10.510906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.852 [2024-09-28 08:56:10.671977] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.852 [2024-09-28 08:56:10.672058] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.852 [2024-09-28 08:56:10.672077] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.852 [2024-09-28 08:56:10.672092] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.852 [2024-09-28 08:56:10.672104] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
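The app_setup_trace notices above spell out how to pull the tracepoint data for this target (group mask 0xFFFF, shm name nvmf, instance 0). A minimal sketch using exactly the values printed there:

    spdk_trace -s nvmf -i 0 > nvmf_trace.txt   # snapshot of tracepoint events while the target runs
    cp /dev/shm/nvmf_trace.0 .                 # or keep the raw shm file for offline analysis/debug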
00:18:32.852 [2024-09-28 08:56:10.672139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.852 [2024-09-28 08:56:10.823215] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.420 [2024-09-28 08:56:11.261038] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.420 malloc0 00:18:33.420 [2024-09-28 08:56:11.326044] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:33.420 [2024-09-28 08:56:11.326387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=75499 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 75499 /var/tmp/bdevperf.sock 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75499 ']' 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:33.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:33.420 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.679 [2024-09-28 08:56:11.469885] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:33.679 [2024-09-28 08:56:11.470084] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75499 ] 00:18:33.679 [2024-09-28 08:56:11.644578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.938 [2024-09-28 08:56:11.858669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.196 [2024-09-28 08:56:12.019231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:34.455 08:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:34.455 08:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:34.455 08:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.txqqXu9dzf 00:18:34.713 08:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:34.972 [2024-09-28 08:56:12.855291] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.972 nvme0n1 00:18:34.972 08:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:35.231 Running I/O for 1 seconds... 00:18:36.167 3095.00 IOPS, 12.09 MiB/s 00:18:36.167 Latency(us) 00:18:36.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.167 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:36.167 Verification LBA range: start 0x0 length 0x2000 00:18:36.167 nvme0n1 : 1.02 3147.82 12.30 0.00 0.00 40067.61 3068.28 24903.68 00:18:36.168 =================================================================================================================== 00:18:36.168 Total : 3147.82 12.30 0.00 0.00 40067.61 3068.28 24903.68 00:18:36.168 { 00:18:36.168 "results": [ 00:18:36.168 { 00:18:36.168 "job": "nvme0n1", 00:18:36.168 "core_mask": "0x2", 00:18:36.168 "workload": "verify", 00:18:36.168 "status": "finished", 00:18:36.168 "verify_range": { 00:18:36.168 "start": 0, 00:18:36.168 "length": 8192 00:18:36.168 }, 00:18:36.168 "queue_depth": 128, 00:18:36.168 "io_size": 4096, 00:18:36.168 "runtime": 1.023884, 00:18:36.168 "iops": 3147.8175262041404, 00:18:36.168 "mibps": 12.296162211734924, 00:18:36.168 "io_failed": 0, 00:18:36.168 "io_timeout": 0, 00:18:36.168 "avg_latency_us": 40067.60507037486, 00:18:36.168 "min_latency_us": 3068.276363636364, 00:18:36.168 "max_latency_us": 24903.68 00:18:36.168 } 00:18:36.168 ], 00:18:36.168 "core_count": 1 00:18:36.168 } 00:18:36.168 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:36.168 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.168 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.426 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.426 08:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:36.426 "subsystems": [ 00:18:36.426 { 00:18:36.426 "subsystem": "keyring", 00:18:36.426 "config": [ 00:18:36.426 { 00:18:36.426 "method": "keyring_file_add_key", 00:18:36.426 "params": { 00:18:36.426 "name": "key0", 00:18:36.426 "path": "/tmp/tmp.txqqXu9dzf" 00:18:36.426 } 00:18:36.426 } 00:18:36.426 ] 00:18:36.426 }, 00:18:36.426 { 00:18:36.426 "subsystem": "iobuf", 00:18:36.426 "config": [ 00:18:36.426 { 00:18:36.426 "method": "iobuf_set_options", 00:18:36.426 "params": { 00:18:36.426 "small_pool_count": 8192, 00:18:36.426 "large_pool_count": 1024, 00:18:36.426 "small_bufsize": 8192, 00:18:36.426 "large_bufsize": 135168 00:18:36.426 } 00:18:36.426 } 00:18:36.426 ] 00:18:36.426 }, 00:18:36.426 { 00:18:36.426 "subsystem": "sock", 00:18:36.426 "config": [ 00:18:36.426 { 00:18:36.426 "method": "sock_set_default_impl", 00:18:36.426 "params": { 00:18:36.426 "impl_name": "uring" 00:18:36.426 } 00:18:36.426 }, 00:18:36.426 { 00:18:36.426 "method": "sock_impl_set_options", 00:18:36.426 "params": { 00:18:36.426 "impl_name": "ssl", 00:18:36.426 "recv_buf_size": 4096, 00:18:36.426 "send_buf_size": 4096, 00:18:36.426 "enable_recv_pipe": true, 00:18:36.426 "enable_quickack": false, 00:18:36.426 "enable_placement_id": 0, 00:18:36.426 "enable_zerocopy_send_server": true, 00:18:36.426 "enable_zerocopy_send_client": false, 00:18:36.426 "zerocopy_threshold": 0, 00:18:36.426 "tls_version": 0, 00:18:36.426 "enable_ktls": false 00:18:36.426 } 00:18:36.426 }, 00:18:36.426 { 00:18:36.426 "method": "sock_impl_set_options", 00:18:36.426 "params": { 00:18:36.426 "impl_name": "posix", 00:18:36.426 "recv_buf_size": 2097152, 00:18:36.426 "send_buf_size": 2097152, 00:18:36.426 "enable_recv_pipe": true, 00:18:36.426 "enable_quickack": false, 00:18:36.426 "enable_placement_id": 0, 00:18:36.426 "enable_zerocopy_send_server": true, 00:18:36.426 "enable_zerocopy_send_client": false, 00:18:36.426 "zerocopy_threshold": 0, 00:18:36.426 "tls_version": 0, 00:18:36.426 "enable_ktls": false 00:18:36.426 } 00:18:36.426 }, 00:18:36.426 { 00:18:36.426 "method": "sock_impl_set_options", 00:18:36.426 "params": { 00:18:36.426 "impl_name": "uring", 00:18:36.426 "recv_buf_size": 2097152, 00:18:36.426 "send_buf_size": 2097152, 00:18:36.426 "enable_recv_pipe": true, 00:18:36.426 "enable_quickack": false, 00:18:36.426 "enable_placement_id": 0, 00:18:36.426 "enable_zerocopy_send_server": false, 00:18:36.426 "enable_zerocopy_send_client": false, 00:18:36.426 "zerocopy_threshold": 0, 00:18:36.426 "tls_version": 0, 00:18:36.426 "enable_ktls": false 00:18:36.426 } 00:18:36.426 } 00:18:36.426 ] 00:18:36.426 }, 00:18:36.426 { 00:18:36.426 "subsystem": "vmd", 00:18:36.426 "config": [] 00:18:36.426 }, 00:18:36.426 { 00:18:36.426 "subsystem": "accel", 00:18:36.426 "config": [ 00:18:36.426 { 00:18:36.426 "method": "accel_set_options", 00:18:36.426 "params": { 00:18:36.426 "small_cache_size": 128, 00:18:36.426 "large_cache_size": 16, 00:18:36.426 "task_count": 2048, 00:18:36.426 "sequence_count": 2048, 00:18:36.426 "buf_count": 2048 00:18:36.426 } 00:18:36.426 } 00:18:36.426 ] 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "subsystem": "bdev", 00:18:36.427 "config": [ 00:18:36.427 { 00:18:36.427 "method": "bdev_set_options", 00:18:36.427 "params": { 00:18:36.427 "bdev_io_pool_size": 65535, 00:18:36.427 "bdev_io_cache_size": 256, 00:18:36.427 "bdev_auto_examine": true, 00:18:36.427 "iobuf_small_cache_size": 128, 00:18:36.427 "iobuf_large_cache_size": 16 00:18:36.427 } 
00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "method": "bdev_raid_set_options", 00:18:36.427 "params": { 00:18:36.427 "process_window_size_kb": 1024, 00:18:36.427 "process_max_bandwidth_mb_sec": 0 00:18:36.427 } 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "method": "bdev_iscsi_set_options", 00:18:36.427 "params": { 00:18:36.427 "timeout_sec": 30 00:18:36.427 } 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "method": "bdev_nvme_set_options", 00:18:36.427 "params": { 00:18:36.427 "action_on_timeout": "none", 00:18:36.427 "timeout_us": 0, 00:18:36.427 "timeout_admin_us": 0, 00:18:36.427 "keep_alive_timeout_ms": 10000, 00:18:36.427 "arbitration_burst": 0, 00:18:36.427 "low_priority_weight": 0, 00:18:36.427 "medium_priority_weight": 0, 00:18:36.427 "high_priority_weight": 0, 00:18:36.427 "nvme_adminq_poll_period_us": 10000, 00:18:36.427 "nvme_ioq_poll_period_us": 0, 00:18:36.427 "io_queue_requests": 0, 00:18:36.427 "delay_cmd_submit": true, 00:18:36.427 "transport_retry_count": 4, 00:18:36.427 "bdev_retry_count": 3, 00:18:36.427 "transport_ack_timeout": 0, 00:18:36.427 "ctrlr_loss_timeout_sec": 0, 00:18:36.427 "reconnect_delay_sec": 0, 00:18:36.427 "fast_io_fail_timeout_sec": 0, 00:18:36.427 "disable_auto_failback": false, 00:18:36.427 "generate_uuids": false, 00:18:36.427 "transport_tos": 0, 00:18:36.427 "nvme_error_stat": false, 00:18:36.427 "rdma_srq_size": 0, 00:18:36.427 "io_path_stat": false, 00:18:36.427 "allow_accel_sequence": false, 00:18:36.427 "rdma_max_cq_size": 0, 00:18:36.427 "rdma_cm_event_timeout_ms": 0, 00:18:36.427 "dhchap_digests": [ 00:18:36.427 "sha256", 00:18:36.427 "sha384", 00:18:36.427 "sha512" 00:18:36.427 ], 00:18:36.427 "dhchap_dhgroups": [ 00:18:36.427 "null", 00:18:36.427 "ffdhe2048", 00:18:36.427 "ffdhe3072", 00:18:36.427 "ffdhe4096", 00:18:36.427 "ffdhe6144", 00:18:36.427 "ffdhe8192" 00:18:36.427 ] 00:18:36.427 } 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "method": "bdev_nvme_set_hotplug", 00:18:36.427 "params": { 00:18:36.427 "period_us": 100000, 00:18:36.427 "enable": false 00:18:36.427 } 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "method": "bdev_malloc_create", 00:18:36.427 "params": { 00:18:36.427 "name": "malloc0", 00:18:36.427 "num_blocks": 8192, 00:18:36.427 "block_size": 4096, 00:18:36.427 "physical_block_size": 4096, 00:18:36.427 "uuid": "2319e8a6-63bd-4609-a594-f2c0feae0955", 00:18:36.427 "optimal_io_boundary": 0, 00:18:36.427 "md_size": 0, 00:18:36.427 "dif_type": 0, 00:18:36.427 "dif_is_head_of_md": false, 00:18:36.427 "dif_pi_format": 0 00:18:36.427 } 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "method": "bdev_wait_for_examine" 00:18:36.427 } 00:18:36.427 ] 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "subsystem": "nbd", 00:18:36.427 "config": [] 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "subsystem": "scheduler", 00:18:36.427 "config": [ 00:18:36.427 { 00:18:36.427 "method": "framework_set_scheduler", 00:18:36.427 "params": { 00:18:36.427 "name": "static" 00:18:36.427 } 00:18:36.427 } 00:18:36.427 ] 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "subsystem": "nvmf", 00:18:36.427 "config": [ 00:18:36.427 { 00:18:36.427 "method": "nvmf_set_config", 00:18:36.427 "params": { 00:18:36.427 "discovery_filter": "match_any", 00:18:36.427 "admin_cmd_passthru": { 00:18:36.427 "identify_ctrlr": false 00:18:36.427 }, 00:18:36.427 "dhchap_digests": [ 00:18:36.427 "sha256", 00:18:36.427 "sha384", 00:18:36.427 "sha512" 00:18:36.427 ], 00:18:36.427 "dhchap_dhgroups": [ 00:18:36.427 "null", 00:18:36.427 "ffdhe2048", 00:18:36.427 "ffdhe3072", 00:18:36.427 "ffdhe4096", 
00:18:36.427 "ffdhe6144", 00:18:36.427 "ffdhe8192" 00:18:36.427 ] 00:18:36.427 } 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "method": "nvmf_set_max_subsystems", 00:18:36.427 "params": { 00:18:36.427 "max_subsystems": 1024 00:18:36.427 } 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "method": "nvmf_set_crdt", 00:18:36.427 "params": { 00:18:36.427 "crdt1": 0, 00:18:36.427 "crdt2": 0, 00:18:36.427 "crdt3": 0 00:18:36.427 } 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "method": "nvmf_create_transport", 00:18:36.427 "params": { 00:18:36.427 "trtype": "TCP", 00:18:36.427 "max_queue_depth": 128, 00:18:36.427 "max_io_qpairs_per_ctrlr": 127, 00:18:36.427 "in_capsule_data_size": 4096, 00:18:36.427 "max_io_size": 131072, 00:18:36.427 "io_unit_size": 131072, 00:18:36.427 "max_aq_depth": 128, 00:18:36.427 "num_shared_buffers": 511, 00:18:36.427 "buf_cache_size": 4294967295, 00:18:36.427 "dif_insert_or_strip": false, 00:18:36.427 "zcopy": false, 00:18:36.427 "c2h_success": false, 00:18:36.427 "sock_priority": 0, 00:18:36.427 "abort_timeout_sec": 1, 00:18:36.427 "ack_timeout": 0, 00:18:36.427 "data_wr_pool_size": 0 00:18:36.427 } 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "method": "nvmf_create_subsystem", 00:18:36.427 "params": { 00:18:36.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.427 "allow_any_host": false, 00:18:36.427 "serial_number": "00000000000000000000", 00:18:36.427 "model_number": "SPDK bdev Controller", 00:18:36.427 "max_namespaces": 32, 00:18:36.427 "min_cntlid": 1, 00:18:36.427 "max_cntlid": 65519, 00:18:36.427 "ana_reporting": false 00:18:36.427 } 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "method": "nvmf_subsystem_add_host", 00:18:36.427 "params": { 00:18:36.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.427 "host": "nqn.2016-06.io.spdk:host1", 00:18:36.427 "psk": "key0" 00:18:36.427 } 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "method": "nvmf_subsystem_add_ns", 00:18:36.427 "params": { 00:18:36.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.427 "namespace": { 00:18:36.427 "nsid": 1, 00:18:36.427 "bdev_name": "malloc0", 00:18:36.427 "nguid": "2319E8A663BD4609A594F2C0FEAE0955", 00:18:36.427 "uuid": "2319e8a6-63bd-4609-a594-f2c0feae0955", 00:18:36.427 "no_auto_visible": false 00:18:36.427 } 00:18:36.427 } 00:18:36.427 }, 00:18:36.427 { 00:18:36.427 "method": "nvmf_subsystem_add_listener", 00:18:36.427 "params": { 00:18:36.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.427 "listen_address": { 00:18:36.427 "trtype": "TCP", 00:18:36.427 "adrfam": "IPv4", 00:18:36.427 "traddr": "10.0.0.3", 00:18:36.427 "trsvcid": "4420" 00:18:36.427 }, 00:18:36.427 "secure_channel": false, 00:18:36.427 "sock_impl": "ssl" 00:18:36.427 } 00:18:36.427 } 00:18:36.427 ] 00:18:36.427 } 00:18:36.427 ] 00:18:36.427 }' 00:18:36.428 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:36.687 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:36.687 "subsystems": [ 00:18:36.687 { 00:18:36.687 "subsystem": "keyring", 00:18:36.687 "config": [ 00:18:36.687 { 00:18:36.687 "method": "keyring_file_add_key", 00:18:36.687 "params": { 00:18:36.687 "name": "key0", 00:18:36.687 "path": "/tmp/tmp.txqqXu9dzf" 00:18:36.687 } 00:18:36.687 } 00:18:36.687 ] 00:18:36.687 }, 00:18:36.687 { 00:18:36.687 "subsystem": "iobuf", 00:18:36.687 "config": [ 00:18:36.687 { 00:18:36.687 "method": "iobuf_set_options", 00:18:36.687 "params": { 00:18:36.687 "small_pool_count": 8192, 00:18:36.687 
"large_pool_count": 1024, 00:18:36.687 "small_bufsize": 8192, 00:18:36.687 "large_bufsize": 135168 00:18:36.687 } 00:18:36.687 } 00:18:36.687 ] 00:18:36.687 }, 00:18:36.687 { 00:18:36.687 "subsystem": "sock", 00:18:36.687 "config": [ 00:18:36.687 { 00:18:36.687 "method": "sock_set_default_impl", 00:18:36.687 "params": { 00:18:36.687 "impl_name": "uring" 00:18:36.687 } 00:18:36.687 }, 00:18:36.687 { 00:18:36.687 "method": "sock_impl_set_options", 00:18:36.687 "params": { 00:18:36.687 "impl_name": "ssl", 00:18:36.687 "recv_buf_size": 4096, 00:18:36.687 "send_buf_size": 4096, 00:18:36.687 "enable_recv_pipe": true, 00:18:36.687 "enable_quickack": false, 00:18:36.687 "enable_placement_id": 0, 00:18:36.687 "enable_zerocopy_send_server": true, 00:18:36.687 "enable_zerocopy_send_client": false, 00:18:36.687 "zerocopy_threshold": 0, 00:18:36.687 "tls_version": 0, 00:18:36.687 "enable_ktls": false 00:18:36.687 } 00:18:36.687 }, 00:18:36.687 { 00:18:36.687 "method": "sock_impl_set_options", 00:18:36.687 "params": { 00:18:36.687 "impl_name": "posix", 00:18:36.687 "recv_buf_size": 2097152, 00:18:36.687 "send_buf_size": 2097152, 00:18:36.687 "enable_recv_pipe": true, 00:18:36.687 "enable_quickack": false, 00:18:36.687 "enable_placement_id": 0, 00:18:36.687 "enable_zerocopy_send_server": true, 00:18:36.687 "enable_zerocopy_send_client": false, 00:18:36.687 "zerocopy_threshold": 0, 00:18:36.687 "tls_version": 0, 00:18:36.687 "enable_ktls": false 00:18:36.687 } 00:18:36.687 }, 00:18:36.687 { 00:18:36.687 "method": "sock_impl_set_options", 00:18:36.687 "params": { 00:18:36.687 "impl_name": "uring", 00:18:36.687 "recv_buf_size": 2097152, 00:18:36.687 "send_buf_size": 2097152, 00:18:36.687 "enable_recv_pipe": true, 00:18:36.687 "enable_quickack": false, 00:18:36.687 "enable_placement_id": 0, 00:18:36.687 "enable_zerocopy_send_server": false, 00:18:36.687 "enable_zerocopy_send_client": false, 00:18:36.687 "zerocopy_threshold": 0, 00:18:36.687 "tls_version": 0, 00:18:36.687 "enable_ktls": false 00:18:36.687 } 00:18:36.687 } 00:18:36.687 ] 00:18:36.687 }, 00:18:36.687 { 00:18:36.687 "subsystem": "vmd", 00:18:36.687 "config": [] 00:18:36.687 }, 00:18:36.687 { 00:18:36.687 "subsystem": "accel", 00:18:36.688 "config": [ 00:18:36.688 { 00:18:36.688 "method": "accel_set_options", 00:18:36.688 "params": { 00:18:36.688 "small_cache_size": 128, 00:18:36.688 "large_cache_size": 16, 00:18:36.688 "task_count": 2048, 00:18:36.688 "sequence_count": 2048, 00:18:36.688 "buf_count": 2048 00:18:36.688 } 00:18:36.688 } 00:18:36.688 ] 00:18:36.688 }, 00:18:36.688 { 00:18:36.688 "subsystem": "bdev", 00:18:36.688 "config": [ 00:18:36.688 { 00:18:36.688 "method": "bdev_set_options", 00:18:36.688 "params": { 00:18:36.688 "bdev_io_pool_size": 65535, 00:18:36.688 "bdev_io_cache_size": 256, 00:18:36.688 "bdev_auto_examine": true, 00:18:36.688 "iobuf_small_cache_size": 128, 00:18:36.688 "iobuf_large_cache_size": 16 00:18:36.688 } 00:18:36.688 }, 00:18:36.688 { 00:18:36.688 "method": "bdev_raid_set_options", 00:18:36.688 "params": { 00:18:36.688 "process_window_size_kb": 1024, 00:18:36.688 "process_max_bandwidth_mb_sec": 0 00:18:36.688 } 00:18:36.688 }, 00:18:36.688 { 00:18:36.688 "method": "bdev_iscsi_set_options", 00:18:36.688 "params": { 00:18:36.688 "timeout_sec": 30 00:18:36.688 } 00:18:36.688 }, 00:18:36.688 { 00:18:36.688 "method": "bdev_nvme_set_options", 00:18:36.688 "params": { 00:18:36.688 "action_on_timeout": "none", 00:18:36.688 "timeout_us": 0, 00:18:36.688 "timeout_admin_us": 0, 00:18:36.688 "keep_alive_timeout_ms": 10000, 
00:18:36.688 "arbitration_burst": 0, 00:18:36.688 "low_priority_weight": 0, 00:18:36.688 "medium_priority_weight": 0, 00:18:36.688 "high_priority_weight": 0, 00:18:36.688 "nvme_adminq_poll_period_us": 10000, 00:18:36.688 "nvme_ioq_poll_period_us": 0, 00:18:36.688 "io_queue_requests": 512, 00:18:36.688 "delay_cmd_submit": true, 00:18:36.688 "transport_retry_count": 4, 00:18:36.688 "bdev_retry_count": 3, 00:18:36.688 "transport_ack_timeout": 0, 00:18:36.688 "ctrlr_loss_timeout_sec": 0, 00:18:36.688 "reconnect_delay_sec": 0, 00:18:36.688 "fast_io_fail_timeout_sec": 0, 00:18:36.688 "disable_auto_failback": false, 00:18:36.688 "generate_uuids": false, 00:18:36.688 "transport_tos": 0, 00:18:36.688 "nvme_error_stat": false, 00:18:36.688 "rdma_srq_size": 0, 00:18:36.688 "io_path_stat": false, 00:18:36.688 "allow_accel_sequence": false, 00:18:36.688 "rdma_max_cq_size": 0, 00:18:36.688 "rdma_cm_event_timeout_ms": 0, 00:18:36.688 "dhchap_digests": [ 00:18:36.688 "sha256", 00:18:36.688 "sha384", 00:18:36.688 "sha512" 00:18:36.688 ], 00:18:36.688 "dhchap_dhgroups": [ 00:18:36.688 "null", 00:18:36.688 "ffdhe2048", 00:18:36.688 "ffdhe3072", 00:18:36.688 "ffdhe4096", 00:18:36.688 "ffdhe6144", 00:18:36.688 "ffdhe8192" 00:18:36.688 ] 00:18:36.688 } 00:18:36.688 }, 00:18:36.688 { 00:18:36.688 "method": "bdev_nvme_attach_controller", 00:18:36.688 "params": { 00:18:36.688 "name": "nvme0", 00:18:36.688 "trtype": "TCP", 00:18:36.688 "adrfam": "IPv4", 00:18:36.688 "traddr": "10.0.0.3", 00:18:36.688 "trsvcid": "4420", 00:18:36.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.688 "prchk_reftag": false, 00:18:36.688 "prchk_guard": false, 00:18:36.688 "ctrlr_loss_timeout_sec": 0, 00:18:36.688 "reconnect_delay_sec": 0, 00:18:36.688 "fast_io_fail_timeout_sec": 0, 00:18:36.688 "psk": "key0", 00:18:36.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.688 "hdgst": false, 00:18:36.688 "ddgst": false 00:18:36.688 } 00:18:36.688 }, 00:18:36.688 { 00:18:36.688 "method": "bdev_nvme_set_hotplug", 00:18:36.688 "params": { 00:18:36.688 "period_us": 100000, 00:18:36.688 "enable": false 00:18:36.688 } 00:18:36.688 }, 00:18:36.688 { 00:18:36.688 "method": "bdev_enable_histogram", 00:18:36.688 "params": { 00:18:36.688 "name": "nvme0n1", 00:18:36.688 "enable": true 00:18:36.688 } 00:18:36.688 }, 00:18:36.688 { 00:18:36.688 "method": "bdev_wait_for_examine" 00:18:36.688 } 00:18:36.688 ] 00:18:36.688 }, 00:18:36.688 { 00:18:36.688 "subsystem": "nbd", 00:18:36.688 "config": [] 00:18:36.688 } 00:18:36.688 ] 00:18:36.688 }' 00:18:36.688 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 75499 00:18:36.688 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75499 ']' 00:18:36.688 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75499 00:18:36.688 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:36.688 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:36.688 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75499 00:18:36.688 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:36.688 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:36.688 killing process with pid 75499 00:18:36.688 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 75499' 00:18:36.688 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75499 00:18:36.688 Received shutdown signal, test time was about 1.000000 seconds 00:18:36.688 00:18:36.688 Latency(us) 00:18:36.688 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.688 =================================================================================================================== 00:18:36.688 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.688 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75499 00:18:38.067 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 75467 00:18:38.067 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75467 ']' 00:18:38.067 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75467 00:18:38.067 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:38.067 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.067 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75467 00:18:38.067 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:38.067 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:38.067 killing process with pid 75467 00:18:38.067 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75467' 00:18:38.067 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75467 00:18:38.067 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75467 00:18:39.008 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:39.008 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:39.008 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.008 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:39.008 "subsystems": [ 00:18:39.008 { 00:18:39.008 "subsystem": "keyring", 00:18:39.008 "config": [ 00:18:39.008 { 00:18:39.008 "method": "keyring_file_add_key", 00:18:39.008 "params": { 00:18:39.008 "name": "key0", 00:18:39.008 "path": "/tmp/tmp.txqqXu9dzf" 00:18:39.008 } 00:18:39.008 } 00:18:39.008 ] 00:18:39.008 }, 00:18:39.008 { 00:18:39.008 "subsystem": "iobuf", 00:18:39.008 "config": [ 00:18:39.008 { 00:18:39.008 "method": "iobuf_set_options", 00:18:39.008 "params": { 00:18:39.008 "small_pool_count": 8192, 00:18:39.008 "large_pool_count": 1024, 00:18:39.008 "small_bufsize": 8192, 00:18:39.008 "large_bufsize": 135168 00:18:39.008 } 00:18:39.008 } 00:18:39.008 ] 00:18:39.008 }, 00:18:39.008 { 00:18:39.008 "subsystem": "sock", 00:18:39.008 "config": [ 00:18:39.008 { 00:18:39.008 "method": "sock_set_default_impl", 00:18:39.008 "params": { 00:18:39.008 "impl_name": "uring" 00:18:39.008 } 00:18:39.008 }, 00:18:39.008 { 00:18:39.008 "method": "sock_impl_set_options", 00:18:39.008 "params": { 00:18:39.008 "impl_name": "ssl", 00:18:39.008 "recv_buf_size": 4096, 00:18:39.008 "send_buf_size": 4096, 00:18:39.008 "enable_recv_pipe": 
true, 00:18:39.008 "enable_quickack": false, 00:18:39.008 "enable_placement_id": 0, 00:18:39.008 "enable_zerocopy_send_server": true, 00:18:39.008 "enable_zerocopy_send_client": false, 00:18:39.008 "zerocopy_threshold": 0, 00:18:39.008 "tls_version": 0, 00:18:39.008 "enable_ktls": false 00:18:39.008 } 00:18:39.008 }, 00:18:39.008 { 00:18:39.008 "method": "sock_impl_set_options", 00:18:39.008 "params": { 00:18:39.008 "impl_name": "posix", 00:18:39.008 "recv_buf_size": 2097152, 00:18:39.008 "send_buf_size": 2097152, 00:18:39.008 "enable_recv_pipe": true, 00:18:39.008 "enable_quickack": false, 00:18:39.008 "enable_placement_id": 0, 00:18:39.008 "enable_zerocopy_send_server": true, 00:18:39.008 "enable_zerocopy_send_client": false, 00:18:39.008 "zerocopy_threshold": 0, 00:18:39.008 "tls_version": 0, 00:18:39.008 "enable_ktls": false 00:18:39.008 } 00:18:39.008 }, 00:18:39.008 { 00:18:39.008 "method": "sock_impl_set_options", 00:18:39.008 "params": { 00:18:39.008 "impl_name": "uring", 00:18:39.008 "recv_buf_size": 2097152, 00:18:39.008 "send_buf_size": 2097152, 00:18:39.008 "enable_recv_pipe": true, 00:18:39.008 "enable_quickack": false, 00:18:39.008 "enable_placement_id": 0, 00:18:39.008 "enable_zerocopy_send_server": false, 00:18:39.008 "enable_zerocopy_send_client": false, 00:18:39.008 "zerocopy_threshold": 0, 00:18:39.008 "tls_version": 0, 00:18:39.008 "enable_ktls": false 00:18:39.008 } 00:18:39.008 } 00:18:39.008 ] 00:18:39.008 }, 00:18:39.008 { 00:18:39.008 "subsystem": "vmd", 00:18:39.008 "config": [] 00:18:39.008 }, 00:18:39.008 { 00:18:39.009 "subsystem": "accel", 00:18:39.009 "config": [ 00:18:39.009 { 00:18:39.009 "method": "accel_set_options", 00:18:39.009 "params": { 00:18:39.009 "small_cache_size": 128, 00:18:39.009 "large_cache_size": 16, 00:18:39.009 "task_count": 2048, 00:18:39.009 "sequence_count": 2048, 00:18:39.009 "buf_count": 2048 00:18:39.009 } 00:18:39.009 } 00:18:39.009 ] 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "subsystem": "bdev", 00:18:39.009 "config": [ 00:18:39.009 { 00:18:39.009 "method": "bdev_set_options", 00:18:39.009 "params": { 00:18:39.009 "bdev_io_pool_size": 65535, 00:18:39.009 "bdev_io_cache_size": 256, 00:18:39.009 "bdev_auto_examine": true, 00:18:39.009 "iobuf_small_cache_size": 128, 00:18:39.009 "iobuf_large_cache_size": 16 00:18:39.009 } 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "method": "bdev_raid_set_options", 00:18:39.009 "params": { 00:18:39.009 "process_window_size_kb": 1024, 00:18:39.009 "process_max_bandwidth_mb_sec": 0 00:18:39.009 } 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "method": "bdev_iscsi_set_options", 00:18:39.009 "params": { 00:18:39.009 "timeout_sec": 30 00:18:39.009 } 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "method": "bdev_nvme_set_options", 00:18:39.009 "params": { 00:18:39.009 "action_on_timeout": "none", 00:18:39.009 "timeout_us": 0, 00:18:39.009 "timeout_admin_us": 0, 00:18:39.009 "keep_alive_timeout_ms": 10000, 00:18:39.009 "arbitration_burst": 0, 00:18:39.009 "low_priority_weight": 0, 00:18:39.009 "medium_priority_weight": 0, 00:18:39.009 "high_priority_weight": 0, 00:18:39.009 "nvme_adminq_poll_period_us": 10000, 00:18:39.009 "nvme_ioq_poll_period_us": 0, 00:18:39.009 "io_queue_requests": 0, 00:18:39.009 "delay_cmd_submit": true, 00:18:39.009 "transport_retry_count": 4, 00:18:39.009 "bdev_retry_count": 3, 00:18:39.009 "transport_ack_timeout": 0, 00:18:39.009 "ctrlr_loss_timeout_sec": 0, 00:18:39.009 "reconnect_delay_sec": 0, 00:18:39.009 "fast_io_fail_timeout_sec": 0, 00:18:39.009 "disable_auto_failback": false, 
00:18:39.009 "generate_uuids": false, 00:18:39.009 "transport_tos": 0, 00:18:39.009 "nvme_error_stat": false, 00:18:39.009 "rdma_srq_size": 0, 00:18:39.009 "io_path_stat": false, 00:18:39.009 "allow_accel_sequence": false, 00:18:39.009 "rdma_max_cq_size": 0, 00:18:39.009 "rdma_cm_event_timeout_ms": 0, 00:18:39.009 "dhchap_digests": [ 00:18:39.009 "sha256", 00:18:39.009 "sha384", 00:18:39.009 "sha512" 00:18:39.009 ], 00:18:39.009 "dhchap_dhgroups": [ 00:18:39.009 "null", 00:18:39.009 "ffdhe2048", 00:18:39.009 "ffdhe3072", 00:18:39.009 "ffdhe4096", 00:18:39.009 "ffdhe6144", 00:18:39.009 "ffdhe8192" 00:18:39.009 ] 00:18:39.009 } 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "method": "bdev_nvme_set_hotplug", 00:18:39.009 "params": { 00:18:39.009 "period_us": 100000, 00:18:39.009 "enable": false 00:18:39.009 } 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "method": "bdev_malloc_create", 00:18:39.009 "params": { 00:18:39.009 "name": "malloc0", 00:18:39.009 "num_blocks": 8192, 00:18:39.009 "block_size": 4096, 00:18:39.009 "physical_block_size": 4096, 00:18:39.009 "uuid": "2319e8a6-63bd-4609-a594-f2c0feae0955", 00:18:39.009 "optimal_io_boundary": 0, 00:18:39.009 "md_size": 0, 00:18:39.009 "dif_type": 0, 00:18:39.009 "dif_is_head_of_md": false, 00:18:39.009 "dif_pi_format": 0 00:18:39.009 } 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "method": "bdev_wait_for_examine" 00:18:39.009 } 00:18:39.009 ] 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "subsystem": "nbd", 00:18:39.009 "config": [] 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "subsystem": "scheduler", 00:18:39.009 "config": [ 00:18:39.009 { 00:18:39.009 "method": "framework_set_scheduler", 00:18:39.009 "params": { 00:18:39.009 "name": "static" 00:18:39.009 } 00:18:39.009 } 00:18:39.009 ] 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "subsystem": "nvmf", 00:18:39.009 "config": [ 00:18:39.009 { 00:18:39.009 "method": "nvmf_set_config", 00:18:39.009 "params": { 00:18:39.009 "discovery_filter": "match_any", 00:18:39.009 "admin_cmd_passthru": { 00:18:39.009 "identify_ctrlr": false 00:18:39.009 }, 00:18:39.009 "dhchap_digests": [ 00:18:39.009 "sha256", 00:18:39.009 "sha384", 00:18:39.009 "sha512" 00:18:39.009 ], 00:18:39.009 "dhchap_dhgroups": [ 00:18:39.009 "null", 00:18:39.009 "ffdhe2048", 00:18:39.009 "ffdhe3072", 00:18:39.009 "ffdhe4096", 00:18:39.009 "ffdhe6144", 00:18:39.009 "ffdhe8192" 00:18:39.009 ] 00:18:39.009 } 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "method": "nvmf_set_max_subsystems", 00:18:39.009 "params": { 00:18:39.009 "max_ 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.009 subsystems": 1024 00:18:39.009 } 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "method": "nvmf_set_crdt", 00:18:39.009 "params": { 00:18:39.009 "crdt1": 0, 00:18:39.009 "crdt2": 0, 00:18:39.009 "crdt3": 0 00:18:39.009 } 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "method": "nvmf_create_transport", 00:18:39.009 "params": { 00:18:39.009 "trtype": "TCP", 00:18:39.009 "max_queue_depth": 128, 00:18:39.009 "max_io_qpairs_per_ctrlr": 127, 00:18:39.009 "in_capsule_data_size": 4096, 00:18:39.009 "max_io_size": 131072, 00:18:39.009 "io_unit_size": 131072, 00:18:39.009 "max_aq_depth": 128, 00:18:39.009 "num_shared_buffers": 511, 00:18:39.009 "buf_cache_size": 4294967295, 00:18:39.009 "dif_insert_or_strip": false, 00:18:39.009 "zcopy": false, 00:18:39.009 "c2h_success": false, 00:18:39.009 "sock_priority": 0, 00:18:39.009 "abort_timeout_sec": 1, 00:18:39.009 "ack_timeout": 0, 00:18:39.009 "data_wr_pool_size": 0 00:18:39.009 } 
00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "method": "nvmf_create_subsystem", 00:18:39.009 "params": { 00:18:39.009 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.009 "allow_any_host": false, 00:18:39.009 "serial_number": "00000000000000000000", 00:18:39.009 "model_number": "SPDK bdev Controller", 00:18:39.009 "max_namespaces": 32, 00:18:39.009 "min_cntlid": 1, 00:18:39.009 "max_cntlid": 65519, 00:18:39.009 "ana_reporting": false 00:18:39.009 } 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "method": "nvmf_subsystem_add_host", 00:18:39.009 "params": { 00:18:39.009 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.009 "host": "nqn.2016-06.io.spdk:host1", 00:18:39.009 "psk": "key0" 00:18:39.009 } 00:18:39.009 }, 00:18:39.009 { 00:18:39.009 "method": "nvmf_subsystem_add_ns", 00:18:39.009 "params": { 00:18:39.009 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.009 "namespace": { 00:18:39.009 "nsid": 1, 00:18:39.009 "bdev_name": "malloc0", 00:18:39.009 "nguid": "2319E8A663BD4609A594F2C0FEAE0955", 00:18:39.009 "uuid": "2319e8a6-63bd-4609-a594-f2c0feae0955", 00:18:39.009 "no_auto_visible": false 00:18:39.009 } 00:18:39.010 } 00:18:39.010 }, 00:18:39.010 { 00:18:39.010 "method": "nvmf_subsystem_add_listener", 00:18:39.010 "params": { 00:18:39.010 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.010 "listen_address": { 00:18:39.010 "trtype": "TCP", 00:18:39.010 "adrfam": "IPv4", 00:18:39.010 "traddr": "10.0.0.3", 00:18:39.010 "trsvcid": "4420" 00:18:39.010 }, 00:18:39.010 "secure_channel": false, 00:18:39.010 "sock_impl": "ssl" 00:18:39.010 } 00:18:39.010 } 00:18:39.010 ] 00:18:39.010 } 00:18:39.010 ] 00:18:39.010 }' 00:18:39.010 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=75575 00:18:39.010 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:39.010 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 75575 00:18:39.010 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75575 ']' 00:18:39.010 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.010 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.010 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.010 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.010 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.010 [2024-09-28 08:56:16.852215] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:39.010 [2024-09-28 08:56:16.852408] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.269 [2024-09-28 08:56:17.016862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.269 [2024-09-28 08:56:17.167159] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
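Condensed for readability, the TLS target setup captured in the log above amounts to roughly the sketch below. It is not the verbatim tls.sh code: the RPC methods, NQNs, the 10.0.0.3:4420 listener, "psk": "key0" and "sock_impl": "ssl" are copied from the config echoed above, while the keyring entry is assumed to reference the same /tmp/tmp.txqqXu9dzf key file named in the bdevperf config further below, and the tgt_tls.json file name exists only for this sketch (the test itself feeds the JSON to nvmf_tgt as /dev/fd/62).

# Sketch: start a TLS-enabled NVMe/TCP target inside the test netns
# (values copied from the log above, trimmed to the TLS-relevant pieces).
cat > /tmp/tgt_tls.json <<'CONF'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.txqqXu9dzf" } }
    ] },
    { "subsystem": "sock", "config": [
        { "method": "sock_impl_set_options",
          "params": { "impl_name": "ssl", "tls_version": 0, "enable_ktls": false } }
    ] },
    { "subsystem": "bdev", "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 } }
    ] },
    { "subsystem": "nvmf", "config": [
        { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } },
        { "method": "nvmf_create_subsystem",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false,
                      "serial_number": "00000000000000000000",
                      "model_number": "SPDK bdev Controller" } },
        { "method": "nvmf_subsystem_add_host",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "host": "nqn.2016-06.io.spdk:host1", "psk": "key0" } },
        { "method": "nvmf_subsystem_add_ns",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "namespace": { "nsid": 1, "bdev_name": "malloc0" } } },
        { "method": "nvmf_subsystem_add_listener",
          "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                      "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                          "traddr": "10.0.0.3", "trsvcid": "4420" },
                      "secure_channel": false, "sock_impl": "ssl" } }
    ] }
  ]
}
CONF
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -c /tmp/tgt_tls.json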
00:18:39.269 [2024-09-28 08:56:17.167239] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.269 [2024-09-28 08:56:17.167274] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.269 [2024-09-28 08:56:17.167289] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.269 [2024-09-28 08:56:17.167300] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.269 [2024-09-28 08:56:17.167418] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.529 [2024-09-28 08:56:17.441742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:39.789 [2024-09-28 08:56:17.598522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.789 [2024-09-28 08:56:17.630487] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:39.789 [2024-09-28 08:56:17.630785] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:39.789 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:39.789 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:39.789 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:39.789 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:39.789 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.789 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.789 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=75607 00:18:39.789 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 75607 /var/tmp/bdevperf.sock 00:18:39.789 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 75607 ']' 00:18:39.789 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.789 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:39.789 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.789 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:39.789 "subsystems": [ 00:18:39.789 { 00:18:39.789 "subsystem": "keyring", 00:18:39.789 "config": [ 00:18:39.789 { 00:18:39.789 "method": "keyring_file_add_key", 00:18:39.789 "params": { 00:18:39.789 "name": "key0", 00:18:39.789 "path": "/tmp/tmp.txqqXu9dzf" 00:18:39.789 } 00:18:39.789 } 00:18:39.789 ] 00:18:39.789 }, 00:18:39.789 { 00:18:39.789 "subsystem": "iobuf", 00:18:39.789 "config": [ 00:18:39.789 { 00:18:39.789 "method": "iobuf_set_options", 00:18:39.789 "params": { 00:18:39.789 "small_pool_count": 8192, 00:18:39.789 "large_pool_count": 1024, 00:18:39.789 "small_bufsize": 8192, 00:18:39.789 "large_bufsize": 135168 00:18:39.789 } 00:18:39.789 } 00:18:39.789 ] 00:18:39.789 }, 00:18:39.789 { 00:18:39.789 "subsystem": "sock", 00:18:39.789 "config": [ 00:18:39.789 { 
00:18:39.789 "method": "sock_set_default_impl", 00:18:39.789 "params": { 00:18:39.789 "impl_name": "uring" 00:18:39.789 } 00:18:39.789 }, 00:18:39.789 { 00:18:39.789 "method": "sock_impl_set_options", 00:18:39.789 "params": { 00:18:39.789 "impl_name": "ssl", 00:18:39.789 "recv_buf_size": 4096, 00:18:39.789 "send_buf_size": 4096, 00:18:39.789 "enable_recv_pipe": true, 00:18:39.789 "enable_quickack": false, 00:18:39.789 "enable_placement_id": 0, 00:18:39.789 "enable_zerocopy_send_server": true, 00:18:39.789 "enable_zerocopy_send_client": false, 00:18:39.789 "zerocopy_threshold": 0, 00:18:39.789 "tls_version": 0, 00:18:39.789 "enable_ktls": false 00:18:39.789 } 00:18:39.789 }, 00:18:39.789 { 00:18:39.789 "method": "sock_impl_set_options", 00:18:39.789 "params": { 00:18:39.789 "impl_name": "posix", 00:18:39.790 "recv_buf_size": 2097152, 00:18:39.790 "send_buf_size": 2097152, 00:18:39.790 "enable_recv_pipe": true, 00:18:39.790 "enable_quickack": false, 00:18:39.790 "enable_placement_id": 0, 00:18:39.790 "enable_zerocopy_send_server": true, 00:18:39.790 "enable_zerocopy_send_client": false, 00:18:39.790 "zerocopy_threshold": 0, 00:18:39.790 "tls_version": 0, 00:18:39.790 "enable_ktls": false 00:18:39.790 } 00:18:39.790 }, 00:18:39.790 { 00:18:39.790 "method": "sock_impl_set_options", 00:18:39.790 "params": { 00:18:39.790 "impl_name": "uring", 00:18:39.790 "recv_buf_size": 2097152, 00:18:39.790 "send_buf_size": 2097152, 00:18:39.790 "enable_recv_pipe": true, 00:18:39.790 "enable_quickack": false, 00:18:39.790 "enable_placement_id": 0, 00:18:39.790 "enable_zerocopy_send_server": false, 00:18:39.790 "enable_zerocopy_send_client": false, 00:18:39.790 "zerocopy_threshold": 0, 00:18:39.790 "tls_version": 0, 00:18:39.790 "enable_ktls": false 00:18:39.790 } 00:18:39.790 } 00:18:39.790 ] 00:18:39.790 }, 00:18:39.790 { 00:18:39.790 "subsystem": "vmd", 00:18:39.790 "config": [] 00:18:39.790 }, 00:18:39.790 { 00:18:39.790 "subsystem": "accel", 00:18:39.790 "config": [ 00:18:39.790 { 00:18:39.790 "method": "accel_set_options", 00:18:39.790 "params": { 00:18:39.790 "small_cache_size": 128, 00:18:39.790 "large_cache_size": 16, 00:18:39.790 "task_count": 2048, 00:18:39.790 "sequence_count": 2048, 00:18:39.790 "buf_count": 2048 00:18:39.790 } 00:18:39.790 } 00:18:39.790 ] 00:18:39.790 }, 00:18:39.790 { 00:18:39.790 "subsystem": "bdev", 00:18:39.790 "config": [ 00:18:39.790 { 00:18:39.790 "method": "bdev_set_options", 00:18:39.790 "params": { 00:18:39.790 "bdev_io_pool_size": 65535, 00:18:39.790 "bdev_io_cache_size": 256, 00:18:39.790 "bdev_auto_examine": true, 00:18:39.790 "iobuf_small_cache_size": 128, 00:18:39.790 "iobuf_large_cache_size": 16 00:18:39.790 } 00:18:39.790 }, 00:18:39.790 { 00:18:39.790 "method": "bdev_raid_set_options", 00:18:39.790 "params": { 00:18:39.790 "process_window_size_kb": 1024, 00:18:39.790 "process_max_bandwidth_mb_sec": 0 00:18:39.790 } 00:18:39.790 }, 00:18:39.790 { 00:18:39.790 "method": "bdev_iscsi_set_options", 00:18:39.790 "params": { 00:18:39.790 "timeout_sec": 30 00:18:39.790 } 00:18:39.790 }, 00:18:39.790 { 00:18:39.790 "method": "bdev_nvme_set_options", 00:18:39.790 "params": { 00:18:39.790 "action_on_timeout": "none", 00:18:39.790 "timeout_us": 0, 00:18:39.790 "timeout_admin_us": 0, 00:18:39.790 "keep_alive_timeout_ms": 10000, 00:18:39.790 "arbitration_burst": 0, 00:18:39.790 "low_priority_weight": 0, 00:18:39.790 "medium_priority_weight": 0, 00:18:39.791 "high_priority_weight": 0, 00:18:39.791 "nvme_adminq_poll_period_us": 10000, 00:18:39.791 "nvme_ioq_poll_period_us": 0, 
00:18:39.791 "io_queue_requests": 512, 00:18:39.791 "delay_cmd_submit": true, 00:18:39.791 "transport_retry_count": 4, 00:18:39.791 "bdev_retry_count": 3, 00:18:39.791 "transport_ack_timeout": 0, 00:18:39.791 "ctrlr_loss_timeout_sec": 0, 00:18:39.791 "reconnect_delay_sec": 0, 00:18:39.791 "fast_io_fail_timeout_sec": 0, 00:18:39.791 "disable_auto_failback": false, 00:18:39.791 "generate_uuids": false, 00:18:39.791 "transport_tos": 0, 00:18:39.791 "nvme_error_stat": false, 00:18:39.791 "rdma_srq_size": 0, 00:18:39.791 "io_path_stat": false, 00:18:39.791 "allow_accel_sequence": false, 00:18:39.791 "rdma_max_cq_size": 0, 00:18:39.791 "rdma_cm_event_timeout_ms": 0, 00:18:39.791 "dhchap_digests": [ 00:18:39.791 "sha256", 00:18:39.791 "sha384", 00:18:39.791 "sha512" 00:18:39.791 ], 00:18:39.791 "dhchap_dhgroups": [ 00:18:39.791 "null", 00:18:39.791 "ffdhe2048", 00:18:39.791 "ffdhe3072", 00:18:39.791 "ffdhe4096", 00:18:39.791 "ffdhe6144", 00:18:39.791 "ffdhe8192" 00:18:39.791 ] 00:18:39.791 } 00:18:39.791 }, 00:18:39.791 { 00:18:39.791 "method": "bdev_nvme_attach_controller", 00:18:39.791 "params": { 00:18:39.791 "name": "nvme0", 00:18:39.791 "trtype": "TCP", 00:18:39.791 "adrfam": "IPv4", 00:18:39.791 "traddr": "10.0.0.3", 00:18:39.791 "trsvcid": "4420", 00:18:39.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.791 "prchk_reftag": false, 00:18:39.791 "prchk_guard": false, 00:18:39.791 "ctrlr_loss_timeout_sec": 0, 00:18:39.791 "reconnect_delay_sec": 0, 00:18:39.791 "fast_io_fail_timeout_sec": 0, 00:18:39.791 "psk": "key0", 00:18:39.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.791 "hdgst": false, 00:18:39.791 "ddgst": false 00:18:39.791 } 00:18:39.791 }, 00:18:39.791 { 00:18:39.791 "method": "bdev_nvme_set_hotplug", 00:18:39.791 "params": { 00:18:39.791 "period_us": 100000, 00:18:39.791 "enable": false 00:18:39.791 } 00:18:39.791 }, 00:18:39.791 { 00:18:39.791 "method": "bdev_enable_histogram", 00:18:39.791 "params": { 00:18:39.791 "name": "nvme0n1", 00:18:39.791 "enable": true 00:18:39.791 } 00:18:39.791 }, 00:18:39.791 { 00:18:39.791 "method": "bdev_wait_for_examine" 00:18:39.791 } 00:18:39.791 ] 00:18:39.791 }, 00:18:39.791 { 00:18:39.791 "subsystem": "nbd", 00:18:39.791 "config": [] 00:18:39.791 } 00:18:39.791 ] 00:18:39.791 }' 00:18:39.791 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.791 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.791 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.051 [2024-09-28 08:56:17.866699] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:18:40.051 [2024-09-28 08:56:17.866924] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75607 ] 00:18:40.051 [2024-09-28 08:56:18.038522] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.310 [2024-09-28 08:56:18.197923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.570 [2024-09-28 08:56:18.444295] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:40.570 [2024-09-28 08:56:18.546661] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:40.829 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.829 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:40.829 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:40.829 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:41.088 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.088 08:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:41.088 Running I/O for 1 seconds... 00:18:42.465 3200.00 IOPS, 12.50 MiB/s 00:18:42.466 Latency(us) 00:18:42.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.466 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:42.466 Verification LBA range: start 0x0 length 0x2000 00:18:42.466 nvme0n1 : 1.04 3206.65 12.53 0.00 0.00 39356.20 8281.37 26095.24 00:18:42.466 =================================================================================================================== 00:18:42.466 Total : 3206.65 12.53 0.00 0.00 39356.20 8281.37 26095.24 00:18:42.466 { 00:18:42.466 "results": [ 00:18:42.466 { 00:18:42.466 "job": "nvme0n1", 00:18:42.466 "core_mask": "0x2", 00:18:42.466 "workload": "verify", 00:18:42.466 "status": "finished", 00:18:42.466 "verify_range": { 00:18:42.466 "start": 0, 00:18:42.466 "length": 8192 00:18:42.466 }, 00:18:42.466 "queue_depth": 128, 00:18:42.466 "io_size": 4096, 00:18:42.466 "runtime": 1.037843, 00:18:42.466 "iops": 3206.650716919611, 00:18:42.466 "mibps": 12.525979362967231, 00:18:42.466 "io_failed": 0, 00:18:42.466 "io_timeout": 0, 00:18:42.466 "avg_latency_us": 39356.20475524476, 00:18:42.466 "min_latency_us": 8281.367272727273, 00:18:42.466 "max_latency_us": 26095.243636363637 00:18:42.466 } 00:18:42.466 ], 00:18:42.466 "core_count": 1 00:18:42.466 } 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:42.466 nvmf_trace.0 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 75607 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75607 ']' 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75607 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75607 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:42.466 killing process with pid 75607 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75607' 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75607 00:18:42.466 Received shutdown signal, test time was about 1.000000 seconds 00:18:42.466 00:18:42.466 Latency(us) 00:18:42.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.466 =================================================================================================================== 00:18:42.466 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.466 08:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75607 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.404 rmmod nvme_tcp 00:18:43.404 rmmod nvme_fabrics 00:18:43.404 rmmod nvme_keyring 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 
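A quick cross-check, not part of the test itself: the verify numbers reported above are internally consistent. 3206.65 IOPS at a 4 KiB I/O size is about 12.53 MiB/s, matching the summary line, and at queue depth 128 Little's law gives 128 / 3206.65, roughly 39.9 ms, close to the reported 39356 us average latency (the small gap is start-up and tear-down time inside the 1.04 s runtime).

# Recompute the two headline numbers from the summary above (values copied from the log).
awk 'BEGIN {
    iops = 3206.65; io_size = 4096; qdepth = 128
    printf "throughput  = %.2f MiB/s\n", iops * io_size / (1024 * 1024)  # log: 12.53 MiB/s
    printf "avg latency = %.0f us\n", qdepth / iops * 1e6                # log: 39356.20 us
}'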
00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 75575 ']' 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 75575 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 75575 ']' 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 75575 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:43.404 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75575 00:18:43.668 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:43.668 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:43.668 killing process with pid 75575 00:18:43.668 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75575' 00:18:43.668 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 75575 00:18:43.668 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 75575 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:44.659 08:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.659 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.917 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:44.917 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.917 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.917 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.917 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:18:44.917 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.frY9N5uuFJ /tmp/tmp.Uwzilhycp4 /tmp/tmp.txqqXu9dzf 00:18:44.917 00:18:44.917 real 1m48.797s 00:18:44.917 user 2m59.272s 00:18:44.917 sys 0m27.069s 00:18:44.917 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:44.917 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.917 ************************************ 00:18:44.917 END TEST nvmf_tls 00:18:44.917 ************************************ 00:18:44.917 08:56:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:44.917 08:56:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:44.917 08:56:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:44.917 08:56:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:44.917 ************************************ 00:18:44.917 START TEST nvmf_fips 00:18:44.917 ************************************ 00:18:44.917 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:44.917 * Looking for test storage... 
00:18:44.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:44.918 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:44.918 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:18:44.918 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:45.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.177 --rc genhtml_branch_coverage=1 00:18:45.177 --rc genhtml_function_coverage=1 00:18:45.177 --rc genhtml_legend=1 00:18:45.177 --rc geninfo_all_blocks=1 00:18:45.177 --rc geninfo_unexecuted_blocks=1 00:18:45.177 00:18:45.177 ' 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:45.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.177 --rc genhtml_branch_coverage=1 00:18:45.177 --rc genhtml_function_coverage=1 00:18:45.177 --rc genhtml_legend=1 00:18:45.177 --rc geninfo_all_blocks=1 00:18:45.177 --rc geninfo_unexecuted_blocks=1 00:18:45.177 00:18:45.177 ' 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:45.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.177 --rc genhtml_branch_coverage=1 00:18:45.177 --rc genhtml_function_coverage=1 00:18:45.177 --rc genhtml_legend=1 00:18:45.177 --rc geninfo_all_blocks=1 00:18:45.177 --rc geninfo_unexecuted_blocks=1 00:18:45.177 00:18:45.177 ' 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:45.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.177 --rc genhtml_branch_coverage=1 00:18:45.177 --rc genhtml_function_coverage=1 00:18:45.177 --rc genhtml_legend=1 00:18:45.177 --rc geninfo_all_blocks=1 00:18:45.177 --rc geninfo_unexecuted_blocks=1 00:18:45.177 00:18:45.177 ' 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
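The version checks traced above (the lcov "lt 1.15 2" comparison here, and the OpenSSL "ge 3.1.1 3.0.0" check further down) all go through the cmp_versions helper in scripts/common.sh: both versions are split on ".", "-" and ":" and compared numerically field by field. A simplified re-statement, assuming purely numeric fields and not the verbatim helper, looks like this:

# Simplified sketch of the field-by-field version comparison the xtrace walks through.
cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
    local IFS=.-: op=$2
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 > d2 )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == *=* ]]   # equal versions satisfy <=, >= and ==
}
cmp_versions 3.1.1 '>=' 3.0.0 && echo "OpenSSL is new enough for the FIPS test"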
00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.177 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.178 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.178 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:45.178 Error setting digest 00:18:45.178 403248AA2C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:45.178 403248AA2C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:45.178 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:45.179 
08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:45.179 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:45.179 Cannot find device "nvmf_init_br" 00:18:45.437 08:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:18:45.437 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:45.437 Cannot find device "nvmf_init_br2" 00:18:45.437 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:18:45.437 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:45.437 Cannot find device "nvmf_tgt_br" 00:18:45.437 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:18:45.437 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:45.437 Cannot find device "nvmf_tgt_br2" 00:18:45.437 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:18:45.437 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:45.437 Cannot find device "nvmf_init_br" 00:18:45.437 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:18:45.437 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:45.438 Cannot find device "nvmf_init_br2" 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:45.438 Cannot find device "nvmf_tgt_br" 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:45.438 Cannot find device "nvmf_tgt_br2" 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:45.438 Cannot find device "nvmf_br" 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:45.438 Cannot find device "nvmf_init_if" 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:45.438 Cannot find device "nvmf_init_if2" 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:45.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:45.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:45.438 08:56:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:45.438 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
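For reference, the nvmf_veth_init steps traced above amount to the following topology setup. Names and addresses are exactly those in the log; the sketch assumes a clean host, whereas the harness first tries to delete any leftover devices, which is why the harmless "Cannot find device" messages appear earlier.

ip netns add nvmf_tgt_ns_spdk

# One veth pair per interface: the *_if end carries traffic, the *_br end joins the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live in the namespace; the initiator interfaces stay on the host.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring every interface up (the namespaced ones via ip netns exec) ...
ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# ... and bridge the host-side peers so the initiator and target ends of each pair can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br

# NVMe/TCP accept rules are tagged with an SPDK_NVMF comment so cleanup can strip them selectively.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'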
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:45.697 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:45.697 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:18:45.697 00:18:45.697 --- 10.0.0.3 ping statistics --- 00:18:45.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.697 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:45.697 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:45.697 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:18:45.697 00:18:45.697 --- 10.0.0.4 ping statistics --- 00:18:45.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.697 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:45.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:18:45.697 00:18:45.697 --- 10.0.0.1 ping statistics --- 00:18:45.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.697 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:45.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:45.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:18:45.697 00:18:45.697 --- 10.0.0.2 ping statistics --- 00:18:45.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.697 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=75948 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 75948 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 75948 ']' 00:18:45.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:45.697 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:45.956 [2024-09-28 08:56:23.738000] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
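The nvmfappstart call in this block just prepends the namespace wrapper (NVMF_TARGET_NS_CMD) to the application command line. Stripped of the harness, it is roughly the following; the polling loop is only an approximation of waitforlisten, not a copy of it:

# Run the target inside the namespace: shm id 0, all tracepoint groups enabled, core mask 0x2.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# waitforlisten equivalent (approximation): poll the default RPC socket until the app answers.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done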
00:18:45.956 [2024-09-28 08:56:23.738408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.956 [2024-09-28 08:56:23.915258] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.215 [2024-09-28 08:56:24.143323] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.215 [2024-09-28 08:56:24.143659] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.215 [2024-09-28 08:56:24.143700] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.215 [2024-09-28 08:56:24.143719] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.215 [2024-09-28 08:56:24.143735] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.215 [2024-09-28 08:56:24.143784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.474 [2024-09-28 08:56:24.308349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:46.733 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:46.733 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:46.733 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:18:46.733 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:46.733 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:46.991 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.991 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:46.991 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:46.991 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:46.991 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.4iI 00:18:46.991 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:46.991 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.4iI 00:18:46.991 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.4iI 00:18:46.991 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.4iI 00:18:46.991 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:47.250 [2024-09-28 08:56:25.024670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.250 [2024-09-28 08:56:25.040581] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.250 [2024-09-28 08:56:25.041048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:47.250 malloc0 00:18:47.250 08:56:25 
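Before any TLS listener is exercised, the FIPS script writes the pre-shared key (PSK interchange format: identity plus base64 secret) to a private temp file. The values below are copied from the trace; only the test user can read the file thanks to the 0600 mode:

# NVMe/TCP TLS PSK in interchange format, readable only by the test user.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)      # resolves to /tmp/spdk-psk.4iI in this run
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"

The same path is then handed to setup_nvmf_tgt_conf, which drives rpc.py to create the TCP transport, the malloc0 namespace, and the TLS listener on 10.0.0.3 port 4420 reported in the trace.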
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:47.250 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=75984 00:18:47.250 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:47.250 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 75984 /var/tmp/bdevperf.sock 00:18:47.250 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 75984 ']' 00:18:47.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.250 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.250 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:47.250 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.250 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:47.250 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:47.508 [2024-09-28 08:56:25.297593] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:18:47.508 [2024-09-28 08:56:25.298041] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75984 ] 00:18:47.508 [2024-09-28 08:56:25.469401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.766 [2024-09-28 08:56:25.664698] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.024 [2024-09-28 08:56:25.821293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:48.282 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:48.282 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:48.282 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.4iI 00:18:48.541 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:48.800 [2024-09-28 08:56:26.595648] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:48.800 TLSTESTn1 00:18:48.800 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:49.059 Running I/O for 10 seconds... 
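On the initiator side, bdevperf is started in wait-for-RPC mode (-z) on its own socket and configured at runtime. The sequence below restates the traced commands, minus the harness wrappers (the harness also waits for the bdevperf socket before issuing RPCs):

# Start bdevperf idle on core mask 0x4: queue depth 128, 4 KiB verify workload, 10 s run.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Register the PSK file under the name key0, then attach over TLS to the target in the namespace.
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.4iI
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Kick off the I/O; this is what produces the "Running I/O for 10 seconds..." line above.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests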
00:18:58.981 3193.00 IOPS, 12.47 MiB/s 3164.50 IOPS, 12.36 MiB/s 3114.67 IOPS, 12.17 MiB/s 3079.25 IOPS, 12.03 MiB/s 3038.60 IOPS, 11.87 MiB/s 3021.00 IOPS, 11.80 MiB/s 3055.00 IOPS, 11.93 MiB/s 3097.38 IOPS, 12.10 MiB/s 3127.56 IOPS, 12.22 MiB/s 3141.80 IOPS, 12.27 MiB/s 00:18:58.981 Latency(us) 00:18:58.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.981 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:58.981 Verification LBA range: start 0x0 length 0x2000 00:18:58.981 TLSTESTn1 : 10.02 3148.20 12.30 0.00 0.00 40587.80 6434.44 31457.28 00:18:58.981 =================================================================================================================== 00:18:58.981 Total : 3148.20 12.30 0.00 0.00 40587.80 6434.44 31457.28 00:18:58.981 { 00:18:58.981 "results": [ 00:18:58.981 { 00:18:58.981 "job": "TLSTESTn1", 00:18:58.981 "core_mask": "0x4", 00:18:58.981 "workload": "verify", 00:18:58.981 "status": "finished", 00:18:58.981 "verify_range": { 00:18:58.981 "start": 0, 00:18:58.981 "length": 8192 00:18:58.981 }, 00:18:58.981 "queue_depth": 128, 00:18:58.981 "io_size": 4096, 00:18:58.981 "runtime": 10.020341, 00:18:58.981 "iops": 3148.196253999739, 00:18:58.981 "mibps": 12.29764161718648, 00:18:58.981 "io_failed": 0, 00:18:58.981 "io_timeout": 0, 00:18:58.981 "avg_latency_us": 40587.79547927125, 00:18:58.981 "min_latency_us": 6434.443636363636, 00:18:58.981 "max_latency_us": 31457.28 00:18:58.981 } 00:18:58.981 ], 00:18:58.981 "core_count": 1 00:18:58.981 } 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:58.981 nvmf_trace.0 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 75984 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 75984 ']' 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 75984 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:58.981 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 75984 00:18:59.240 killing process with pid 75984 00:18:59.240 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.240 00:18:59.240 Latency(us) 00:18:59.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.240 =================================================================================================================== 00:18:59.240 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.240 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:59.240 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:59.240 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75984' 00:18:59.240 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 75984 00:18:59.240 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 75984 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:00.177 rmmod nvme_tcp 00:19:00.177 rmmod nvme_fabrics 00:19:00.177 rmmod nvme_keyring 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 75948 ']' 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 75948 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 75948 ']' 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 75948 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75948 00:19:00.177 killing process with pid 75948 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75948' 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 75948 00:19:00.177 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 
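The per-job JSON block above is the machine-readable counterpart of the latency table. If it were saved to a file (bdevperf_results.json is a purely illustrative name, and jq is not part of the test), the headline numbers could be pulled out like this:

# Illustrative only: summarize each job's IOPS and average latency from the saved JSON.
jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, avg latency \(.avg_latency_us|floor) us"' bdevperf_results.json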
75948 00:19:01.552 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.4iI 00:19:01.553 ************************************ 00:19:01.553 END TEST nvmf_fips 00:19:01.553 ************************************ 00:19:01.553 00:19:01.553 real 0m16.713s 00:19:01.553 user 0m23.852s 00:19:01.553 sys 0m5.473s 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:01.553 ************************************ 00:19:01.553 START TEST nvmf_control_msg_list 00:19:01.553 ************************************ 00:19:01.553 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:01.813 * Looking for test storage... 00:19:01.813 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:01.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.813 --rc genhtml_branch_coverage=1 00:19:01.813 --rc genhtml_function_coverage=1 00:19:01.813 --rc genhtml_legend=1 00:19:01.813 --rc geninfo_all_blocks=1 00:19:01.813 --rc geninfo_unexecuted_blocks=1 00:19:01.813 00:19:01.813 ' 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:01.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.813 --rc genhtml_branch_coverage=1 00:19:01.813 --rc genhtml_function_coverage=1 00:19:01.813 --rc genhtml_legend=1 00:19:01.813 --rc geninfo_all_blocks=1 00:19:01.813 --rc geninfo_unexecuted_blocks=1 00:19:01.813 00:19:01.813 ' 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:01.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.813 --rc genhtml_branch_coverage=1 00:19:01.813 --rc genhtml_function_coverage=1 00:19:01.813 --rc genhtml_legend=1 00:19:01.813 --rc geninfo_all_blocks=1 00:19:01.813 --rc geninfo_unexecuted_blocks=1 00:19:01.813 00:19:01.813 ' 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:01.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.813 --rc genhtml_branch_coverage=1 00:19:01.813 --rc genhtml_function_coverage=1 00:19:01.813 --rc genhtml_legend=1 00:19:01.813 --rc geninfo_all_blocks=1 00:19:01.813 --rc geninfo_unexecuted_blocks=1 00:19:01.813 00:19:01.813 ' 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:01.813 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:01.814 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:01.814 Cannot find device "nvmf_init_br" 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:01.814 Cannot find device "nvmf_init_br2" 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:01.814 Cannot find device "nvmf_tgt_br" 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:01.814 Cannot find device "nvmf_tgt_br2" 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:19:01.814 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:02.073 Cannot find device "nvmf_init_br" 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:02.073 Cannot find device "nvmf_init_br2" 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:02.073 Cannot find device "nvmf_tgt_br" 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:02.073 Cannot find device "nvmf_tgt_br2" 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:02.073 Cannot find device "nvmf_br" 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:02.073 Cannot find 
device "nvmf_init_if" 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:02.073 Cannot find device "nvmf_init_if2" 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:02.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:02.073 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:02.073 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:02.073 08:56:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:02.073 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:02.073 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:02.073 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:02.073 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:02.073 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:02.073 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:02.073 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:02.332 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:02.332 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:02.332 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:02.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:02.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:19:02.333 00:19:02.333 --- 10.0.0.3 ping statistics --- 00:19:02.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.333 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:02.333 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:02.333 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:19:02.333 00:19:02.333 --- 10.0.0.4 ping statistics --- 00:19:02.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.333 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:02.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:02.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:19:02.333 00:19:02.333 --- 10.0.0.1 ping statistics --- 00:19:02.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.333 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:02.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:02.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:19:02.333 00:19:02.333 --- 10.0.0.2 ping statistics --- 00:19:02.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.333 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:02.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=76395 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 76395 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 76395 ']' 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.333 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:02.333 [2024-09-28 08:56:40.290892] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:02.333 [2024-09-28 08:56:40.291316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.592 [2024-09-28 08:56:40.464423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.851 [2024-09-28 08:56:40.667994] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.851 [2024-09-28 08:56:40.668249] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.851 [2024-09-28 08:56:40.668393] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.851 [2024-09-28 08:56:40.668529] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.851 [2024-09-28 08:56:40.668573] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.851 [2024-09-28 08:56:40.668686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.851 [2024-09-28 08:56:40.820370] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:03.419 [2024-09-28 08:56:41.295134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:03.419 Malloc0 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:03.419 [2024-09-28 08:56:41.369968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=76427 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=76428 00:19:03.419 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:03.420 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=76429 00:19:03.420 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:03.420 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 76427 00:19:03.679 [2024-09-28 08:56:41.590701] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:03.679 [2024-09-28 08:56:41.610950] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:03.679 [2024-09-28 08:56:41.621181] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:05.057 Initializing NVMe Controllers 00:19:05.057 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:05.057 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:05.057 Initialization complete. Launching workers. 00:19:05.057 ======================================================== 00:19:05.057 Latency(us) 00:19:05.057 Device Information : IOPS MiB/s Average min max 00:19:05.057 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2787.00 10.89 358.20 161.68 1583.05 00:19:05.057 ======================================================== 00:19:05.057 Total : 2787.00 10.89 358.20 161.68 1583.05 00:19:05.057 00:19:05.057 Initializing NVMe Controllers 00:19:05.057 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:05.057 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:05.057 Initialization complete. Launching workers. 00:19:05.057 ======================================================== 00:19:05.057 Latency(us) 00:19:05.057 Device Information : IOPS MiB/s Average min max 00:19:05.057 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2790.97 10.90 357.72 193.25 847.30 00:19:05.057 ======================================================== 00:19:05.057 Total : 2790.97 10.90 357.72 193.25 847.30 00:19:05.057 00:19:05.057 Initializing NVMe Controllers 00:19:05.057 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:05.057 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:05.057 Initialization complete. Launching workers. 
00:19:05.057 ======================================================== 00:19:05.057 Latency(us) 00:19:05.057 Device Information : IOPS MiB/s Average min max 00:19:05.057 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2808.95 10.97 355.49 143.81 894.11 00:19:05.057 ======================================================== 00:19:05.057 Total : 2808.95 10.97 355.49 143.81 894.11 00:19:05.057 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 76428 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 76429 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:05.057 rmmod nvme_tcp 00:19:05.057 rmmod nvme_fabrics 00:19:05.057 rmmod nvme_keyring 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 76395 ']' 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 76395 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 76395 ']' 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 76395 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76395 00:19:05.057 killing process with pid 76395 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76395' 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 76395 00:19:05.057 08:56:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 76395 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:05.993 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:05.994 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:19:06.252 00:19:06.252 real 0m4.601s 00:19:06.252 user 0m6.671s 00:19:06.252 
sys 0m1.561s 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:06.252 ************************************ 00:19:06.252 END TEST nvmf_control_msg_list 00:19:06.252 ************************************ 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:06.252 ************************************ 00:19:06.252 START TEST nvmf_wait_for_buf 00:19:06.252 ************************************ 00:19:06.252 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:06.512 * Looking for test storage... 00:19:06.512 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.512 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:06.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.513 --rc genhtml_branch_coverage=1 00:19:06.513 --rc genhtml_function_coverage=1 00:19:06.513 --rc genhtml_legend=1 00:19:06.513 --rc geninfo_all_blocks=1 00:19:06.513 --rc geninfo_unexecuted_blocks=1 00:19:06.513 00:19:06.513 ' 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:06.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.513 --rc genhtml_branch_coverage=1 00:19:06.513 --rc genhtml_function_coverage=1 00:19:06.513 --rc genhtml_legend=1 00:19:06.513 --rc geninfo_all_blocks=1 00:19:06.513 --rc geninfo_unexecuted_blocks=1 00:19:06.513 00:19:06.513 ' 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:06.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.513 --rc genhtml_branch_coverage=1 00:19:06.513 --rc genhtml_function_coverage=1 00:19:06.513 --rc genhtml_legend=1 00:19:06.513 --rc geninfo_all_blocks=1 00:19:06.513 --rc geninfo_unexecuted_blocks=1 00:19:06.513 00:19:06.513 ' 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:06.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.513 --rc genhtml_branch_coverage=1 00:19:06.513 --rc genhtml_function_coverage=1 00:19:06.513 --rc genhtml_legend=1 00:19:06.513 --rc geninfo_all_blocks=1 00:19:06.513 --rc geninfo_unexecuted_blocks=1 00:19:06.513 00:19:06.513 ' 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:06.513 08:56:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:06.513 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:06.513 Cannot find device "nvmf_init_br" 00:19:06.513 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:06.514 Cannot find device "nvmf_init_br2" 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:06.514 Cannot find device "nvmf_tgt_br" 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:06.514 Cannot find device "nvmf_tgt_br2" 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:06.514 Cannot find device "nvmf_init_br" 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:06.514 Cannot find device "nvmf_init_br2" 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:06.514 Cannot find device "nvmf_tgt_br" 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:06.514 Cannot find device "nvmf_tgt_br2" 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:19:06.514 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:06.773 Cannot find device "nvmf_br" 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:06.773 Cannot find device "nvmf_init_if" 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:06.773 Cannot find device "nvmf_init_if2" 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:06.773 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:06.773 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:06.773 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:06.774 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:06.774 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:19:06.774 00:19:06.774 --- 10.0.0.3 ping statistics --- 00:19:06.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.774 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:06.774 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:06.774 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:19:06.774 00:19:06.774 --- 10.0.0.4 ping statistics --- 00:19:06.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.774 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:06.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:06.774 00:19:06.774 --- 10.0.0.1 ping statistics --- 00:19:06.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.774 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:06.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:06.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:19:06.774 00:19:06.774 --- 10.0.0.2 ping statistics --- 00:19:06.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.774 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:06.774 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:07.033 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:07.033 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:07.033 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:07.033 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:07.033 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=76681 00:19:07.033 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:07.033 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 76681 00:19:07.033 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 76681 ']' 00:19:07.033 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.033 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:07.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.033 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.033 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:07.033 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:07.033 [2024-09-28 08:56:44.911778] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
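(The nvmf_wait_for_buf case configured in the entries below is easier to follow with its knobs pulled out of the log. This is a condensed sketch of the RPC sequence recorded further down, written as it appears via the harness's rpc_cmd helper; the values are the ones used in this run.)

rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0    # disable accel's buffer caches
rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192  # deliberately tiny small-buffer pool
rpc_cmd framework_start_init
rpc_cmd bdev_malloc_create -b Malloc0 32 512
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24            # small shared-buffer pool for the transport
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
# With 131072-byte randread I/O from spdk_nvme_perf against a 154-entry small pool, the target
# repeatedly has to wait for iobuf buffers; the test then reads iobuf_get_stats and checks that
# small_pool.retry is non-zero (4674 in this run).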
00:19:07.033 [2024-09-28 08:56:44.911954] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.292 [2024-09-28 08:56:45.083984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.292 [2024-09-28 08:56:45.244268] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.292 [2024-09-28 08:56:45.244362] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.292 [2024-09-28 08:56:45.244409] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.292 [2024-09-28 08:56:45.244426] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.292 [2024-09-28 08:56:45.244438] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:07.292 [2024-09-28 08:56:45.244487] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.231 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:08.231 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:19:08.231 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:08.231 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:08.231 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:08.231 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.231 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:08.231 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:08.231 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:08.231 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.232 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:08.232 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.232 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:08.232 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.232 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:08.232 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.232 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:08.232 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.232 08:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:08.232 [2024-09-28 08:56:46.067257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:08.232 Malloc0 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:08.232 [2024-09-28 08:56:46.201709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.232 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:08.232 [2024-09-28 08:56:46.225942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:08.491 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.491 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:19:08.491 [2024-09-28 08:56:46.462042] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:09.942 Initializing NVMe Controllers 00:19:09.942 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:19:09.942 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:09.942 Initialization complete. Launching workers. 00:19:09.942 ======================================================== 00:19:09.942 Latency(us) 00:19:09.942 Device Information : IOPS MiB/s Average min max 00:19:09.942 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 490.08 61.26 8160.92 7638.80 12046.14 00:19:09.942 ======================================================== 00:19:09.942 Total : 490.08 61.26 8160.92 7638.80 12046.14 00:19:09.942 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4674 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4674 -eq 0 ]] 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:09.942 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:09.942 rmmod nvme_tcp 00:19:09.942 rmmod nvme_fabrics 00:19:09.942 rmmod nvme_keyring 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 76681 ']' 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 76681 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 76681 ']' 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- 
# kill -0 76681 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76681 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76681' 00:19:10.201 killing process with pid 76681 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 76681 00:19:10.201 08:56:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 76681 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:11.136 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:11.136 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:11.136 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:11.136 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:11.136 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:11.136 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:11.136 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.136 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.136 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.394 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:19:11.395 00:19:11.395 real 0m4.953s 00:19:11.395 user 0m4.469s 00:19:11.395 sys 0m0.963s 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:11.395 ************************************ 00:19:11.395 END TEST nvmf_wait_for_buf 00:19:11.395 ************************************ 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:11.395 ************************************ 00:19:11.395 START TEST nvmf_fuzz 00:19:11.395 ************************************ 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:19:11.395 * Looking for test storage... 
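The xtrace block that follows comes from autotest_common.sh probing the installed lcov before the fuzz test proper starts: it extracts the version with awk, then feeds it to the dotted-version comparison helper in scripts/common.sh to decide whether the newer branch/function coverage flags can be used. As a rough, simplified sketch of that comparison (the real helper is the lt/cmp_versions pair traced below; the name version_lt and the compact loop here are illustrative only):

    # succeed (return 0) when $1 is an older version than $2, e.g. version_lt 1.15 2
    version_lt() {
        local IFS=.-:                       # split fields on the same separators the trace uses
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0}; y=${b[i]:-0}      # missing fields compare as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                            # equal versions are not less-than
    }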
00:19:11.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:11.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.395 --rc genhtml_branch_coverage=1 00:19:11.395 --rc genhtml_function_coverage=1 00:19:11.395 --rc genhtml_legend=1 00:19:11.395 --rc geninfo_all_blocks=1 00:19:11.395 --rc geninfo_unexecuted_blocks=1 00:19:11.395 00:19:11.395 ' 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:11.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.395 --rc genhtml_branch_coverage=1 00:19:11.395 --rc genhtml_function_coverage=1 00:19:11.395 --rc genhtml_legend=1 00:19:11.395 --rc geninfo_all_blocks=1 00:19:11.395 --rc geninfo_unexecuted_blocks=1 00:19:11.395 00:19:11.395 ' 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:11.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.395 --rc genhtml_branch_coverage=1 00:19:11.395 --rc genhtml_function_coverage=1 00:19:11.395 --rc genhtml_legend=1 00:19:11.395 --rc geninfo_all_blocks=1 00:19:11.395 --rc geninfo_unexecuted_blocks=1 00:19:11.395 00:19:11.395 ' 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:11.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:11.395 --rc genhtml_branch_coverage=1 00:19:11.395 --rc genhtml_function_coverage=1 00:19:11.395 --rc genhtml_legend=1 00:19:11.395 --rc geninfo_all_blocks=1 00:19:11.395 --rc geninfo_unexecuted_blocks=1 00:19:11.395 00:19:11.395 ' 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
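Next, test/nvmf/common.sh sets up the NVMe-oF defaults for the run: the three listener ports, the virtual (veth) network type, and a throwaway host NQN generated with nvme gen-hostnqn, whose UUID portion doubles as the host ID. Pulled out of the harness, the assignments amount to roughly the following (the uuid: suffix extraction is shown here as a simple parameter expansion and may differ from the exact expression in common.sh; the UUID itself is random per run, not the fixed value seen in this log):

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NET_TYPE=virt                               # veth pairs + a network namespace instead of real NICs
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # host ID reuses the UUID part of the NQN
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")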
00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.395 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.654 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:11.655 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
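Since NET_TYPE=virt, prepare_net_devs falls through to nvmf_veth_init, and the long run of ip commands below builds the entire test network from scratch. The "Cannot find device ..." messages at the start are expected: the helper first tears down any leftovers from a previous run, and each failed removal is swallowed with true. What gets built is an nvmf_tgt_ns_spdk namespace for the target, veth pairs whose host-side ends are enslaved to an nvmf_br bridge, initiator addresses 10.0.0.1 and 10.0.0.2 on the host, target addresses 10.0.0.3 and 10.0.0.4 inside the namespace, and iptables ACCEPT rules tagged with an SPDK_NVMF comment so cleanup can later filter them out of iptables-save. Condensed to a single initiator/target pair (the trace creates two of each), the topology is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br          # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                    # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up         # bridge ties the host-side ends together
    ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3                                                 # host -> namespace reachability check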
00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:11.655 Cannot find device "nvmf_init_br" 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:19:11.655 08:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:11.655 Cannot find device "nvmf_init_br2" 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:11.655 Cannot find device "nvmf_tgt_br" 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:11.655 Cannot find device "nvmf_tgt_br2" 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:11.655 Cannot find device "nvmf_init_br" 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:11.655 Cannot find device "nvmf_init_br2" 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:11.655 Cannot find device "nvmf_tgt_br" 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:11.655 Cannot find device "nvmf_tgt_br2" 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:11.655 Cannot find device "nvmf_br" 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:11.655 Cannot find device "nvmf_init_if" 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:11.655 Cannot find device "nvmf_init_if2" 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:11.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:11.655 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:11.655 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:11.656 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:11.915 08:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:11.915 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:11.915 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:19:11.915 00:19:11.915 --- 10.0.0.3 ping statistics --- 00:19:11.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.915 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:11.915 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:11.915 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:19:11.915 00:19:11.915 --- 10.0.0.4 ping statistics --- 00:19:11.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.915 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:11.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:11.915 00:19:11.915 --- 10.0.0.1 ping statistics --- 00:19:11.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.915 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:11.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:11.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:19:11.915 00:19:11.915 --- 10.0.0.2 ping statistics --- 00:19:11.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.915 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # return 0 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=76984 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 76984 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 76984 ']' 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
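With nvmf_tgt (pid 76984) running inside the namespace and listening on /var/tmp/spdk.sock, fabrics_fuzz.sh provisions a minimal target over JSON-RPC and then runs the nvme_fuzz app against it twice: a 30-second randomized pass seeded with 123456, followed by a replay of the canned requests in example.json. rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; written as plain rpc.py calls from the SPDK repo root, the sequence is roughly:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512               # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420'
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a     # randomized fuzz, 30 s
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a

Both runs end with "Shutting down the fuzz application", after which the subsystem is deleted and nvmftestfini tears the target and the veth network back down.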
00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.915 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:13.292 Malloc0 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:19:13.292 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:19:13.860 Shutting down the fuzz application 00:19:13.860 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:19:14.427 Shutting down the fuzz application 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:14.686 rmmod nvme_tcp 00:19:14.686 rmmod nvme_fabrics 00:19:14.686 rmmod nvme_keyring 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 76984 ']' 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 76984 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 76984 ']' 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 76984 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76984 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:14.686 killing process with pid 76984 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76984' 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 76984 00:19:14.686 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 76984 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:16.065 08:56:53 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:19:16.065 00:19:16.065 real 0m4.749s 00:19:16.065 user 0m5.315s 00:19:16.065 sys 0m0.892s 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:16.065 ************************************ 00:19:16.065 END TEST nvmf_fuzz 00:19:16.065 ************************************ 00:19:16.065 08:56:53 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.065 ************************************ 00:19:16.065 START TEST nvmf_multiconnection 00:19:16.065 ************************************ 00:19:16.065 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:19:16.325 * Looking for test storage... 00:19:16.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:16.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.325 --rc genhtml_branch_coverage=1 00:19:16.325 --rc genhtml_function_coverage=1 00:19:16.325 --rc genhtml_legend=1 00:19:16.325 --rc geninfo_all_blocks=1 00:19:16.325 --rc geninfo_unexecuted_blocks=1 00:19:16.325 00:19:16.325 ' 00:19:16.325 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:16.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.325 --rc genhtml_branch_coverage=1 00:19:16.325 --rc genhtml_function_coverage=1 00:19:16.326 --rc genhtml_legend=1 00:19:16.326 --rc geninfo_all_blocks=1 00:19:16.326 --rc geninfo_unexecuted_blocks=1 00:19:16.326 00:19:16.326 ' 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:16.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.326 --rc genhtml_branch_coverage=1 00:19:16.326 --rc genhtml_function_coverage=1 00:19:16.326 --rc genhtml_legend=1 00:19:16.326 --rc geninfo_all_blocks=1 00:19:16.326 --rc geninfo_unexecuted_blocks=1 00:19:16.326 00:19:16.326 ' 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:16.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.326 --rc genhtml_branch_coverage=1 00:19:16.326 --rc genhtml_function_coverage=1 00:19:16.326 --rc genhtml_legend=1 00:19:16.326 --rc geninfo_all_blocks=1 00:19:16.326 --rc geninfo_unexecuted_blocks=1 00:19:16.326 00:19:16.326 ' 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.326 
08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:16.326 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:16.326 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:16.327 08:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:16.327 Cannot find device "nvmf_init_br" 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:16.327 Cannot find device "nvmf_init_br2" 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:16.327 Cannot find device "nvmf_tgt_br" 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:16.327 Cannot find device "nvmf_tgt_br2" 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:16.327 Cannot find device "nvmf_init_br" 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:16.327 Cannot find device "nvmf_init_br2" 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:16.327 Cannot find device "nvmf_tgt_br" 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:16.327 Cannot find device "nvmf_tgt_br2" 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:19:16.327 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:16.586 Cannot find device "nvmf_br" 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:16.586 Cannot find device "nvmf_init_if" 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:19:16.586 Cannot find device "nvmf_init_if2" 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:16.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:16.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:16.586 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:16.846 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:16.846 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:19:16.846 00:19:16.846 --- 10.0.0.3 ping statistics --- 00:19:16.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.846 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:16.846 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:16.846 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:19:16.846 00:19:16.846 --- 10.0.0.4 ping statistics --- 00:19:16.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.846 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:16.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:16.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:16.846 00:19:16.846 --- 10.0.0.1 ping statistics --- 00:19:16.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.846 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:16.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:16.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:19:16.846 00:19:16.846 --- 10.0.0.2 ping statistics --- 00:19:16.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.846 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # return 0 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=77242 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 77242 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 77242 ']' 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:16.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
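Note: the nvmf_veth_init trace above amounts to building the whole NVMe/TCP test network in software before any target traffic flows. A minimal hand-runnable sketch of the same layout, reusing the interface, namespace, and address names that appear in the log (error handling and the second initiator/target pair omitted; illustrative only, not the test script itself):

    # Sketch of the veth/bridge topology that nvmf_veth_init sets up.
    ip netns add nvmf_tgt_ns_spdk                                # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                              # bridge joining both veth peers
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP port
    ping -c 1 10.0.0.3                                           # verify root ns reaches the target ns

The pings captured in the log (10.0.0.3/10.0.0.4 from the root namespace, 10.0.0.1/10.0.0.2 from inside nvmf_tgt_ns_spdk) are exactly this verification step in both directions.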
00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:16.846 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:16.846 [2024-09-28 08:56:54.761945] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:16.846 [2024-09-28 08:56:54.762371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.105 [2024-09-28 08:56:54.937893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:17.364 [2024-09-28 08:56:55.171494] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.364 [2024-09-28 08:56:55.171836] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.364 [2024-09-28 08:56:55.172021] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.364 [2024-09-28 08:56:55.172253] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.364 [2024-09-28 08:56:55.172395] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.364 [2024-09-28 08:56:55.172756] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.364 [2024-09-28 08:56:55.172950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.364 [2024-09-28 08:56:55.173528] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:17.364 [2024-09-28 08:56:55.173544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.623 [2024-09-28 08:56:55.359349] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:17.882 [2024-09-28 08:56:55.787044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:19:17.882 08:56:55 
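At this point the target application has been launched inside the namespace and the TCP transport has been created; the loop that follows then provisions eleven subsystems. In the test, rpc_cmd forwards these calls to SPDK's scripts/rpc.py (or an equivalent RPC client), so the manual equivalent would look roughly like the sketch below, with paths and flags copied from the log and one loop iteration shown for cnode1:

    # Launch the target in the prepared namespace (flags as captured in the log).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # Once the RPC socket is listening, create the TCP transport the same way the test does.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # Per-subsystem pattern repeated for cnode1..cnode11 in the trace below:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

Each subsystem gets one 64 MiB / 512-byte-block malloc bdev as its namespace and a listener on 10.0.0.3:4420, which is what the repeated bdev_malloc_create / nvmf_create_subsystem / nvmf_subsystem_add_ns / nvmf_subsystem_add_listener entries below record.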
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:17.882 Malloc1 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.882 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 [2024-09-28 08:56:55.893360] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 Malloc2 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 Malloc3 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.141 Malloc4 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.141 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.401 Malloc5 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:19:18.401 
08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.401 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.402 Malloc6 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.402 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.662 Malloc7 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.662 Malloc8 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.662 
08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.662 Malloc9 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:19:18.662 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.663 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.922 Malloc10 00:19:18.922 08:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:19:18.922 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.923 Malloc11 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:18.923 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:19:19.182 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:19:19.182 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:19.182 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:19.182 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:19.182 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:21.087 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:21.087 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:21.087 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:19:21.087 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:21.087 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:21.087 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:21.087 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:21.087 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:19:21.346 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:19:21.346 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:21.346 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:21.346 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:21.346 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:23.252 08:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:23.252 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:23.252 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:19:23.252 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:23.252 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:23.252 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:23.252 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:23.252 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:19:23.511 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:19:23.511 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:23.511 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:23.511 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:23.511 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:25.415 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:25.415 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:25.415 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:19:25.415 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:25.415 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:25.415 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:25.415 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:25.415 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:19:25.674 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:19:25.674 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:25.674 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:25.674 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:19:25.674 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:27.578 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:27.578 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:27.578 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:19:27.578 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:27.578 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:27.578 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:27.578 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.578 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:19:27.850 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:19:27.850 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:27.850 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:27.850 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:27.850 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:29.766 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:29.766 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:29.766 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:19:29.766 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:29.766 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:29.766 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:29.766 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.766 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:19:29.767 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:19:29.767 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:29.767 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:19:29.767 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:29.767 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:32.299 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:32.299 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:32.299 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:19:32.299 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:32.299 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:32.299 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:32.299 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:32.299 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:19:32.299 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:19:32.299 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:32.299 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:32.299 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:32.299 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:34.202 08:57:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:34.202 08:57:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:34.202 08:57:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:19:34.202 08:57:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:34.202 08:57:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:34.202 08:57:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:34.202 08:57:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:34.202 08:57:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:19:34.202 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:34.202 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:19:34.202 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:34.202 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:34.202 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:36.734 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:36.734 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:36.734 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:19:36.734 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:36.734 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:36.734 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:36.734 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.734 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:19:36.734 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:36.734 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:36.734 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:36.734 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:36.734 08:57:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:38.633 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:38.633 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:38.633 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:19:38.633 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:38.633 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:38.633 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:38.633 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:38.633 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:19:38.633 08:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:38.633 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:38.633 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:38.633 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:38.633 08:57:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:40.534 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:40.534 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:40.534 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:19:40.534 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:40.534 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:40.534 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:40.534 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:40.534 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:19:40.793 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:40.793 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:19:40.793 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:19:40.793 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:19:40.793 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:19:42.695 08:57:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:19:42.695 08:57:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:19:42.695 08:57:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:19:42.695 08:57:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:19:42.695 08:57:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:19:42.695 08:57:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:19:42.695 08:57:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:42.695 [global] 00:19:42.695 thread=1 00:19:42.695 invalidate=1 00:19:42.695 rw=read 00:19:42.695 time_based=1 
00:19:42.695 runtime=10 00:19:42.695 ioengine=libaio 00:19:42.695 direct=1 00:19:42.695 bs=262144 00:19:42.695 iodepth=64 00:19:42.695 norandommap=1 00:19:42.695 numjobs=1 00:19:42.695 00:19:42.955 [job0] 00:19:42.955 filename=/dev/nvme0n1 00:19:42.955 [job1] 00:19:42.955 filename=/dev/nvme10n1 00:19:42.955 [job2] 00:19:42.955 filename=/dev/nvme1n1 00:19:42.955 [job3] 00:19:42.955 filename=/dev/nvme2n1 00:19:42.955 [job4] 00:19:42.955 filename=/dev/nvme3n1 00:19:42.955 [job5] 00:19:42.955 filename=/dev/nvme4n1 00:19:42.955 [job6] 00:19:42.955 filename=/dev/nvme5n1 00:19:42.955 [job7] 00:19:42.955 filename=/dev/nvme6n1 00:19:42.955 [job8] 00:19:42.955 filename=/dev/nvme7n1 00:19:42.955 [job9] 00:19:42.955 filename=/dev/nvme8n1 00:19:42.955 [job10] 00:19:42.955 filename=/dev/nvme9n1 00:19:42.955 Could not set queue depth (nvme0n1) 00:19:42.955 Could not set queue depth (nvme10n1) 00:19:42.955 Could not set queue depth (nvme1n1) 00:19:42.955 Could not set queue depth (nvme2n1) 00:19:42.955 Could not set queue depth (nvme3n1) 00:19:42.955 Could not set queue depth (nvme4n1) 00:19:42.955 Could not set queue depth (nvme5n1) 00:19:42.955 Could not set queue depth (nvme6n1) 00:19:42.955 Could not set queue depth (nvme7n1) 00:19:42.955 Could not set queue depth (nvme8n1) 00:19:42.955 Could not set queue depth (nvme9n1) 00:19:43.214 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:43.214 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:43.214 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:43.214 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:43.214 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:43.214 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:43.214 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:43.214 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:43.214 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:43.214 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:43.214 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:43.214 fio-3.35 00:19:43.214 Starting 11 threads 00:19:55.423 00:19:55.423 job0: (groupid=0, jobs=1): err= 0: pid=77702: Sat Sep 28 08:57:31 2024 00:19:55.423 read: IOPS=155, BW=38.8MiB/s (40.7MB/s)(394MiB/10144msec) 00:19:55.423 slat (usec): min=20, max=420564, avg=6367.56, stdev=19094.78 00:19:55.423 clat (msec): min=19, max=742, avg=405.28, stdev=91.24 00:19:55.423 lat (msec): min=24, max=964, avg=411.65, stdev=92.03 00:19:55.423 clat percentiles (msec): 00:19:55.423 | 1.00th=[ 56], 5.00th=[ 313], 10.00th=[ 338], 20.00th=[ 363], 00:19:55.423 | 30.00th=[ 376], 40.00th=[ 388], 50.00th=[ 397], 60.00th=[ 409], 00:19:55.423 | 70.00th=[ 422], 80.00th=[ 439], 90.00th=[ 472], 95.00th=[ 584], 00:19:55.423 | 99.00th=[ 718], 99.50th=[ 718], 99.90th=[ 743], 99.95th=[ 743], 00:19:55.423 | 99.99th=[ 743] 00:19:55.423 bw ( KiB/s): min=14877, max=45056, 
per=5.83%, avg=38645.15, stdev=6900.41, samples=20 00:19:55.423 iops : min= 58, max= 176, avg=150.85, stdev=26.94, samples=20 00:19:55.423 lat (msec) : 20=0.06%, 100=1.40%, 250=1.14%, 500=89.20%, 750=8.20% 00:19:55.423 cpu : usr=0.06%, sys=0.76%, ctx=314, majf=0, minf=4097 00:19:55.423 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:19:55.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.423 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:55.423 issued rwts: total=1574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.423 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.423 job1: (groupid=0, jobs=1): err= 0: pid=77704: Sat Sep 28 08:57:31 2024 00:19:55.423 read: IOPS=188, BW=47.2MiB/s (49.5MB/s)(479MiB/10144msec) 00:19:55.423 slat (usec): min=22, max=79676, avg=5230.01, stdev=12782.25 00:19:55.423 clat (msec): min=17, max=489, avg=333.36, stdev=96.89 00:19:55.423 lat (msec): min=18, max=498, avg=338.59, stdev=98.44 00:19:55.423 clat percentiles (msec): 00:19:55.423 | 1.00th=[ 40], 5.00th=[ 136], 10.00th=[ 163], 20.00th=[ 224], 00:19:55.423 | 30.00th=[ 347], 40.00th=[ 363], 50.00th=[ 372], 60.00th=[ 380], 00:19:55.423 | 70.00th=[ 393], 80.00th=[ 401], 90.00th=[ 418], 95.00th=[ 426], 00:19:55.423 | 99.00th=[ 447], 99.50th=[ 456], 99.90th=[ 489], 99.95th=[ 489], 00:19:55.423 | 99.99th=[ 489] 00:19:55.423 bw ( KiB/s): min=39345, max=93370, per=7.14%, avg=47377.60, stdev=15231.37, samples=20 00:19:55.423 iops : min= 153, max= 364, avg=184.90, stdev=59.42, samples=20 00:19:55.423 lat (msec) : 20=0.10%, 50=1.20%, 100=0.10%, 250=20.63%, 500=77.96% 00:19:55.423 cpu : usr=0.12%, sys=0.87%, ctx=369, majf=0, minf=4097 00:19:55.423 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:19:55.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.423 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:55.423 issued rwts: total=1915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.423 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.423 job2: (groupid=0, jobs=1): err= 0: pid=77705: Sat Sep 28 08:57:31 2024 00:19:55.423 read: IOPS=101, BW=25.3MiB/s (26.5MB/s)(257MiB/10172msec) 00:19:55.423 slat (usec): min=13, max=423442, avg=9728.18, stdev=30983.05 00:19:55.423 clat (msec): min=30, max=891, avg=622.34, stdev=134.78 00:19:55.423 lat (msec): min=32, max=891, avg=632.07, stdev=133.48 00:19:55.423 clat percentiles (msec): 00:19:55.423 | 1.00th=[ 190], 5.00th=[ 443], 10.00th=[ 477], 20.00th=[ 502], 00:19:55.423 | 30.00th=[ 535], 40.00th=[ 575], 50.00th=[ 634], 60.00th=[ 684], 00:19:55.423 | 70.00th=[ 718], 80.00th=[ 760], 90.00th=[ 776], 95.00th=[ 802], 00:19:55.423 | 99.00th=[ 835], 99.50th=[ 860], 99.90th=[ 885], 99.95th=[ 894], 00:19:55.423 | 99.99th=[ 894] 00:19:55.423 bw ( KiB/s): min= 7168, max=32768, per=3.72%, avg=24695.60, stdev=7367.19, samples=20 00:19:55.423 iops : min= 28, max= 128, avg=96.35, stdev=28.73, samples=20 00:19:55.423 lat (msec) : 50=0.78%, 250=0.39%, 500=19.55%, 750=58.27%, 1000=21.01% 00:19:55.423 cpu : usr=0.01%, sys=0.44%, ctx=185, majf=0, minf=4097 00:19:55.423 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:19:55.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.423 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:55.423 issued rwts: total=1028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:19:55.423 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.423 job3: (groupid=0, jobs=1): err= 0: pid=77706: Sat Sep 28 08:57:31 2024 00:19:55.423 read: IOPS=110, BW=27.6MiB/s (29.0MB/s)(281MiB/10175msec) 00:19:55.423 slat (usec): min=21, max=282485, avg=8936.91, stdev=25836.66 00:19:55.423 clat (msec): min=49, max=802, avg=569.49, stdev=120.56 00:19:55.423 lat (msec): min=49, max=806, avg=578.42, stdev=121.44 00:19:55.423 clat percentiles (msec): 00:19:55.423 | 1.00th=[ 53], 5.00th=[ 393], 10.00th=[ 439], 20.00th=[ 493], 00:19:55.423 | 30.00th=[ 535], 40.00th=[ 567], 50.00th=[ 584], 60.00th=[ 609], 00:19:55.423 | 70.00th=[ 634], 80.00th=[ 667], 90.00th=[ 701], 95.00th=[ 726], 00:19:55.423 | 99.00th=[ 760], 99.50th=[ 768], 99.90th=[ 768], 99.95th=[ 802], 00:19:55.423 | 99.99th=[ 802] 00:19:55.423 bw ( KiB/s): min=16896, max=37888, per=4.09%, avg=27131.20, stdev=5129.61, samples=20 00:19:55.423 iops : min= 66, max= 148, avg=105.85, stdev=20.07, samples=20 00:19:55.424 lat (msec) : 50=0.53%, 100=1.07%, 250=0.98%, 500=19.04%, 750=75.71% 00:19:55.424 lat (msec) : 1000=2.67% 00:19:55.424 cpu : usr=0.03%, sys=0.55%, ctx=201, majf=0, minf=4097 00:19:55.424 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:19:55.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.424 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:55.424 issued rwts: total=1124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.424 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.424 job4: (groupid=0, jobs=1): err= 0: pid=77707: Sat Sep 28 08:57:31 2024 00:19:55.424 read: IOPS=188, BW=47.2MiB/s (49.5MB/s)(478MiB/10141msec) 00:19:55.424 slat (usec): min=21, max=83427, avg=5232.29, stdev=12647.98 00:19:55.424 clat (msec): min=46, max=499, avg=333.56, stdev=95.15 00:19:55.424 lat (msec): min=46, max=521, avg=338.79, stdev=96.58 00:19:55.424 clat percentiles (msec): 00:19:55.424 | 1.00th=[ 49], 5.00th=[ 142], 10.00th=[ 174], 20.00th=[ 222], 00:19:55.424 | 30.00th=[ 338], 40.00th=[ 359], 50.00th=[ 372], 60.00th=[ 380], 00:19:55.424 | 70.00th=[ 388], 80.00th=[ 401], 90.00th=[ 414], 95.00th=[ 426], 00:19:55.424 | 99.00th=[ 447], 99.50th=[ 472], 99.90th=[ 502], 99.95th=[ 502], 00:19:55.424 | 99.99th=[ 502] 00:19:55.424 bw ( KiB/s): min=37888, max=93508, per=7.14%, avg=47334.10, stdev=15419.25, samples=20 00:19:55.424 iops : min= 148, max= 365, avg=184.75, stdev=60.12, samples=20 00:19:55.424 lat (msec) : 50=1.25%, 100=0.99%, 250=18.71%, 500=79.04% 00:19:55.424 cpu : usr=0.08%, sys=0.90%, ctx=370, majf=0, minf=4097 00:19:55.424 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:19:55.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.424 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:55.424 issued rwts: total=1913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.424 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.424 job5: (groupid=0, jobs=1): err= 0: pid=77708: Sat Sep 28 08:57:31 2024 00:19:55.424 read: IOPS=157, BW=39.5MiB/s (41.4MB/s)(401MiB/10143msec) 00:19:55.424 slat (usec): min=20, max=366358, avg=6243.45, stdev=18386.83 00:19:55.424 clat (msec): min=12, max=737, avg=398.38, stdev=90.41 00:19:55.424 lat (msec): min=16, max=887, avg=404.63, stdev=91.60 00:19:55.424 clat percentiles (msec): 00:19:55.424 | 1.00th=[ 111], 5.00th=[ 268], 10.00th=[ 330], 20.00th=[ 355], 00:19:55.424 | 30.00th=[ 372], 40.00th=[ 
384], 50.00th=[ 393], 60.00th=[ 401], 00:19:55.424 | 70.00th=[ 418], 80.00th=[ 435], 90.00th=[ 468], 95.00th=[ 600], 00:19:55.424 | 99.00th=[ 693], 99.50th=[ 701], 99.90th=[ 735], 99.95th=[ 735], 00:19:55.424 | 99.99th=[ 735] 00:19:55.424 bw ( KiB/s): min=15360, max=45568, per=5.94%, avg=39363.25, stdev=6778.17, samples=20 00:19:55.424 iops : min= 60, max= 178, avg=153.65, stdev=26.44, samples=20 00:19:55.424 lat (msec) : 20=0.12%, 250=4.74%, 500=86.70%, 750=8.43% 00:19:55.424 cpu : usr=0.11%, sys=0.72%, ctx=300, majf=0, minf=4098 00:19:55.424 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:19:55.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.424 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:55.424 issued rwts: total=1602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.424 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.424 job6: (groupid=0, jobs=1): err= 0: pid=77709: Sat Sep 28 08:57:31 2024 00:19:55.424 read: IOPS=163, BW=40.8MiB/s (42.8MB/s)(415MiB/10147msec) 00:19:55.424 slat (usec): min=20, max=390560, avg=5727.49, stdev=16247.05 00:19:55.424 clat (msec): min=130, max=692, avg=385.42, stdev=82.10 00:19:55.424 lat (msec): min=148, max=770, avg=391.15, stdev=82.93 00:19:55.424 clat percentiles (msec): 00:19:55.424 | 1.00th=[ 163], 5.00th=[ 239], 10.00th=[ 309], 20.00th=[ 342], 00:19:55.424 | 30.00th=[ 359], 40.00th=[ 376], 50.00th=[ 384], 60.00th=[ 397], 00:19:55.424 | 70.00th=[ 409], 80.00th=[ 430], 90.00th=[ 456], 95.00th=[ 489], 00:19:55.424 | 99.00th=[ 684], 99.50th=[ 684], 99.90th=[ 693], 99.95th=[ 693], 00:19:55.424 | 99.99th=[ 693] 00:19:55.424 bw ( KiB/s): min=31744, max=47104, per=6.15%, avg=40793.45, stdev=4105.39, samples=20 00:19:55.424 iops : min= 124, max= 184, avg=159.25, stdev=15.98, samples=20 00:19:55.424 lat (msec) : 250=5.61%, 500=89.69%, 750=4.70% 00:19:55.424 cpu : usr=0.09%, sys=0.80%, ctx=355, majf=0, minf=4097 00:19:55.424 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:19:55.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.424 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:55.424 issued rwts: total=1658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.424 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.424 job7: (groupid=0, jobs=1): err= 0: pid=77710: Sat Sep 28 08:57:31 2024 00:19:55.424 read: IOPS=1167, BW=292MiB/s (306MB/s)(2926MiB/10029msec) 00:19:55.424 slat (usec): min=19, max=16007, avg=850.74, stdev=2553.76 00:19:55.424 clat (usec): min=15425, max=77497, avg=53927.67, stdev=7569.27 00:19:55.424 lat (usec): min=16159, max=77540, avg=54778.42, stdev=7190.30 00:19:55.424 clat percentiles (usec): 00:19:55.424 | 1.00th=[36439], 5.00th=[40109], 10.00th=[45351], 20.00th=[47449], 00:19:55.424 | 30.00th=[49546], 40.00th=[51643], 50.00th=[53216], 60.00th=[57410], 00:19:55.424 | 70.00th=[59507], 80.00th=[61604], 90.00th=[63177], 95.00th=[64226], 00:19:55.424 | 99.00th=[66323], 99.50th=[66847], 99.90th=[77071], 99.95th=[77071], 00:19:55.424 | 99.99th=[77071] 00:19:55.424 bw ( KiB/s): min=286208, max=302685, per=44.93%, avg=298004.30, stdev=5355.21, samples=20 00:19:55.424 iops : min= 1118, max= 1182, avg=1163.95, stdev=20.86, samples=20 00:19:55.424 lat (msec) : 20=0.03%, 50=32.45%, 100=67.53% 00:19:55.424 cpu : usr=0.50%, sys=3.59%, ctx=1402, majf=0, minf=4097 00:19:55.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, 
>=64=99.5% 00:19:55.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:55.424 issued rwts: total=11705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.424 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.424 job8: (groupid=0, jobs=1): err= 0: pid=77711: Sat Sep 28 08:57:31 2024 00:19:55.424 read: IOPS=108, BW=27.2MiB/s (28.5MB/s)(277MiB/10173msec) 00:19:55.424 slat (usec): min=20, max=249217, avg=8603.64, stdev=24906.79 00:19:55.424 clat (msec): min=145, max=792, avg=579.24, stdev=122.31 00:19:55.424 lat (msec): min=164, max=824, avg=587.84, stdev=123.25 00:19:55.424 clat percentiles (msec): 00:19:55.424 | 1.00th=[ 167], 5.00th=[ 330], 10.00th=[ 447], 20.00th=[ 510], 00:19:55.424 | 30.00th=[ 535], 40.00th=[ 567], 50.00th=[ 592], 60.00th=[ 625], 00:19:55.424 | 70.00th=[ 651], 80.00th=[ 684], 90.00th=[ 718], 95.00th=[ 743], 00:19:55.424 | 99.00th=[ 776], 99.50th=[ 776], 99.90th=[ 793], 99.95th=[ 793], 00:19:55.424 | 99.99th=[ 793] 00:19:55.424 bw ( KiB/s): min=10240, max=33280, per=4.02%, avg=26663.35, stdev=6098.19, samples=20 00:19:55.424 iops : min= 40, max= 130, avg=104.00, stdev=23.76, samples=20 00:19:55.424 lat (msec) : 250=2.71%, 500=15.64%, 750=78.48%, 1000=3.16% 00:19:55.424 cpu : usr=0.07%, sys=0.51%, ctx=207, majf=0, minf=4097 00:19:55.424 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:19:55.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.424 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:55.424 issued rwts: total=1106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.424 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.424 job9: (groupid=0, jobs=1): err= 0: pid=77712: Sat Sep 28 08:57:31 2024 00:19:55.424 read: IOPS=113, BW=28.2MiB/s (29.6MB/s)(287MiB/10168msec) 00:19:55.424 slat (usec): min=32, max=196684, avg=8717.81, stdev=23343.39 00:19:55.424 clat (msec): min=93, max=772, avg=557.00, stdev=122.05 00:19:55.424 lat (msec): min=97, max=805, avg=565.71, stdev=123.21 00:19:55.424 clat percentiles (msec): 00:19:55.424 | 1.00th=[ 100], 5.00th=[ 305], 10.00th=[ 426], 20.00th=[ 498], 00:19:55.424 | 30.00th=[ 527], 40.00th=[ 542], 50.00th=[ 575], 60.00th=[ 600], 00:19:55.424 | 70.00th=[ 617], 80.00th=[ 651], 90.00th=[ 684], 95.00th=[ 709], 00:19:55.424 | 99.00th=[ 735], 99.50th=[ 751], 99.90th=[ 776], 99.95th=[ 776], 00:19:55.424 | 99.99th=[ 776] 00:19:55.424 bw ( KiB/s): min=17408, max=37888, per=4.19%, avg=27775.00, stdev=5341.26, samples=20 00:19:55.424 iops : min= 68, max= 148, avg=108.35, stdev=20.75, samples=20 00:19:55.424 lat (msec) : 100=1.48%, 250=2.52%, 500=16.45%, 750=79.03%, 1000=0.52% 00:19:55.424 cpu : usr=0.06%, sys=0.57%, ctx=206, majf=0, minf=4097 00:19:55.424 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:19:55.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.424 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:55.424 issued rwts: total=1149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.424 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.424 job10: (groupid=0, jobs=1): err= 0: pid=77713: Sat Sep 28 08:57:31 2024 00:19:55.424 read: IOPS=156, BW=39.1MiB/s (41.0MB/s)(397MiB/10146msec) 00:19:55.424 slat (usec): min=21, max=268133, avg=6330.07, stdev=17573.20 00:19:55.424 clat (msec): min=19, max=752, avg=402.36, 
stdev=81.48 00:19:55.424 lat (msec): min=23, max=800, avg=408.69, stdev=82.18 00:19:55.424 clat percentiles (msec): 00:19:55.424 | 1.00th=[ 161], 5.00th=[ 309], 10.00th=[ 342], 20.00th=[ 363], 00:19:55.424 | 30.00th=[ 380], 40.00th=[ 388], 50.00th=[ 393], 60.00th=[ 401], 00:19:55.424 | 70.00th=[ 414], 80.00th=[ 430], 90.00th=[ 477], 95.00th=[ 523], 00:19:55.424 | 99.00th=[ 726], 99.50th=[ 735], 99.90th=[ 751], 99.95th=[ 751], 00:19:55.424 | 99.99th=[ 751] 00:19:55.424 bw ( KiB/s): min=17408, max=44544, per=5.87%, avg=38950.80, stdev=6274.34, samples=20 00:19:55.424 iops : min= 68, max= 174, avg=152.05, stdev=24.47, samples=20 00:19:55.424 lat (msec) : 20=0.06%, 50=0.63%, 100=0.06%, 250=1.51%, 500=90.35% 00:19:55.424 lat (msec) : 750=7.19%, 1000=0.19% 00:19:55.424 cpu : usr=0.07%, sys=0.79%, ctx=330, majf=0, minf=4097 00:19:55.424 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:19:55.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.424 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:55.424 issued rwts: total=1586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.424 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:55.424 00:19:55.424 Run status group 0 (all jobs): 00:19:55.424 READ: bw=648MiB/s (679MB/s), 25.3MiB/s-292MiB/s (26.5MB/s-306MB/s), io=6590MiB (6910MB), run=10029-10175msec 00:19:55.424 00:19:55.424 Disk stats (read/write): 00:19:55.424 nvme0n1: ios=3021/0, merge=0/0, ticks=1221185/0, in_queue=1221185, util=97.86% 00:19:55.424 nvme10n1: ios=3702/0, merge=0/0, ticks=1224872/0, in_queue=1224872, util=97.92% 00:19:55.425 nvme1n1: ios=1936/0, merge=0/0, ticks=1201819/0, in_queue=1201819, util=98.14% 00:19:55.425 nvme2n1: ios=2122/0, merge=0/0, ticks=1211295/0, in_queue=1211295, util=98.26% 00:19:55.425 nvme3n1: ios=3703/0, merge=0/0, ticks=1224393/0, in_queue=1224393, util=98.19% 00:19:55.425 nvme4n1: ios=3089/0, merge=0/0, ticks=1222963/0, in_queue=1222963, util=98.51% 00:19:55.425 nvme5n1: ios=3202/0, merge=0/0, ticks=1226656/0, in_queue=1226656, util=98.61% 00:19:55.425 nvme6n1: ios=23290/0, merge=0/0, ticks=1239238/0, in_queue=1239238, util=98.67% 00:19:55.425 nvme7n1: ios=2085/0, merge=0/0, ticks=1214445/0, in_queue=1214445, util=98.89% 00:19:55.425 nvme8n1: ios=2171/0, merge=0/0, ticks=1214808/0, in_queue=1214808, util=98.99% 00:19:55.425 nvme9n1: ios=3047/0, merge=0/0, ticks=1224283/0, in_queue=1224283, util=99.20% 00:19:55.425 08:57:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:55.425 [global] 00:19:55.425 thread=1 00:19:55.425 invalidate=1 00:19:55.425 rw=randwrite 00:19:55.425 time_based=1 00:19:55.425 runtime=10 00:19:55.425 ioengine=libaio 00:19:55.425 direct=1 00:19:55.425 bs=262144 00:19:55.425 iodepth=64 00:19:55.425 norandommap=1 00:19:55.425 numjobs=1 00:19:55.425 00:19:55.425 [job0] 00:19:55.425 filename=/dev/nvme0n1 00:19:55.425 [job1] 00:19:55.425 filename=/dev/nvme10n1 00:19:55.425 [job2] 00:19:55.425 filename=/dev/nvme1n1 00:19:55.425 [job3] 00:19:55.425 filename=/dev/nvme2n1 00:19:55.425 [job4] 00:19:55.425 filename=/dev/nvme3n1 00:19:55.425 [job5] 00:19:55.425 filename=/dev/nvme4n1 00:19:55.425 [job6] 00:19:55.425 filename=/dev/nvme5n1 00:19:55.425 [job7] 00:19:55.425 filename=/dev/nvme6n1 00:19:55.425 [job8] 00:19:55.425 filename=/dev/nvme7n1 00:19:55.425 [job9] 00:19:55.425 filename=/dev/nvme8n1 00:19:55.425 [job10] 
00:19:55.425 filename=/dev/nvme9n1 00:19:55.425 Could not set queue depth (nvme0n1) 00:19:55.425 Could not set queue depth (nvme10n1) 00:19:55.425 Could not set queue depth (nvme1n1) 00:19:55.425 Could not set queue depth (nvme2n1) 00:19:55.425 Could not set queue depth (nvme3n1) 00:19:55.425 Could not set queue depth (nvme4n1) 00:19:55.425 Could not set queue depth (nvme5n1) 00:19:55.425 Could not set queue depth (nvme6n1) 00:19:55.425 Could not set queue depth (nvme7n1) 00:19:55.425 Could not set queue depth (nvme8n1) 00:19:55.425 Could not set queue depth (nvme9n1) 00:19:55.425 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:55.425 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:55.425 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:55.425 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:55.425 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:55.425 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:55.425 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:55.425 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:55.425 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:55.425 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:55.425 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:55.425 fio-3.35 00:19:55.425 Starting 11 threads 00:20:05.449 00:20:05.449 job0: (groupid=0, jobs=1): err= 0: pid=77909: Sat Sep 28 08:57:42 2024 00:20:05.449 write: IOPS=548, BW=137MiB/s (144MB/s)(1385MiB/10104msec); 0 zone resets 00:20:05.449 slat (usec): min=17, max=14326, avg=1799.59, stdev=3070.87 00:20:05.449 clat (msec): min=20, max=212, avg=114.85, stdev= 8.38 00:20:05.449 lat (msec): min=20, max=212, avg=116.65, stdev= 7.92 00:20:05.449 clat percentiles (msec): 00:20:05.449 | 1.00th=[ 105], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 111], 00:20:05.449 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 116], 60.00th=[ 117], 00:20:05.449 | 70.00th=[ 117], 80.00th=[ 118], 90.00th=[ 118], 95.00th=[ 120], 00:20:05.449 | 99.00th=[ 128], 99.50th=[ 161], 99.90th=[ 205], 99.95th=[ 205], 00:20:05.449 | 99.99th=[ 213] 00:20:05.449 bw ( KiB/s): min=129024, max=147456, per=15.91%, avg=140236.80, stdev=3765.72, samples=20 00:20:05.449 iops : min= 504, max= 576, avg=547.80, stdev=14.71, samples=20 00:20:05.449 lat (msec) : 50=0.22%, 100=0.58%, 250=99.21% 00:20:05.449 cpu : usr=0.94%, sys=1.67%, ctx=4943, majf=0, minf=1 00:20:05.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:20:05.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:05.449 issued rwts: total=0,5541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.449 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:05.449 job1: (groupid=0, jobs=1): 
err= 0: pid=77910: Sat Sep 28 08:57:42 2024 00:20:05.449 write: IOPS=351, BW=87.8MiB/s (92.1MB/s)(891MiB/10146msec); 0 zone resets 00:20:05.449 slat (usec): min=18, max=91140, avg=2800.27, stdev=5125.63 00:20:05.449 clat (msec): min=15, max=322, avg=179.28, stdev=19.83 00:20:05.449 lat (msec): min=15, max=322, avg=182.08, stdev=19.55 00:20:05.449 clat percentiles (msec): 00:20:05.449 | 1.00th=[ 80], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:20:05.449 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 180], 60.00th=[ 182], 00:20:05.449 | 70.00th=[ 184], 80.00th=[ 186], 90.00th=[ 190], 95.00th=[ 197], 00:20:05.449 | 99.00th=[ 230], 99.50th=[ 275], 99.90th=[ 313], 99.95th=[ 321], 00:20:05.449 | 99.99th=[ 321] 00:20:05.449 bw ( KiB/s): min=83968, max=94208, per=10.16%, avg=89623.70, stdev=2520.26, samples=20 00:20:05.449 iops : min= 328, max= 368, avg=350.05, stdev= 9.81, samples=20 00:20:05.449 lat (msec) : 20=0.11%, 50=0.48%, 100=0.67%, 250=97.92%, 500=0.81% 00:20:05.449 cpu : usr=0.64%, sys=1.13%, ctx=1629, majf=0, minf=1 00:20:05.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:20:05.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:05.449 issued rwts: total=0,3565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.449 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:05.449 job2: (groupid=0, jobs=1): err= 0: pid=77920: Sat Sep 28 08:57:42 2024 00:20:05.449 write: IOPS=419, BW=105MiB/s (110MB/s)(1061MiB/10115msec); 0 zone resets 00:20:05.449 slat (usec): min=15, max=68535, avg=2351.32, stdev=4136.34 00:20:05.449 clat (msec): min=70, max=262, avg=150.20, stdev=10.11 00:20:05.449 lat (msec): min=70, max=262, avg=152.55, stdev= 9.40 00:20:05.449 clat percentiles (msec): 00:20:05.449 | 1.00th=[ 138], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 144], 00:20:05.449 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 153], 60.00th=[ 153], 00:20:05.449 | 70.00th=[ 153], 80.00th=[ 155], 90.00th=[ 157], 95.00th=[ 157], 00:20:05.449 | 99.00th=[ 188], 99.50th=[ 215], 99.90th=[ 255], 99.95th=[ 255], 00:20:05.449 | 99.99th=[ 264] 00:20:05.450 bw ( KiB/s): min=94396, max=111104, per=12.13%, avg=106991.80, stdev=3466.10, samples=20 00:20:05.450 iops : min= 368, max= 434, avg=417.90, stdev=13.68, samples=20 00:20:05.450 lat (msec) : 100=0.38%, 250=99.48%, 500=0.14% 00:20:05.450 cpu : usr=0.79%, sys=1.29%, ctx=5328, majf=0, minf=1 00:20:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:20:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:05.450 issued rwts: total=0,4242,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:05.450 job3: (groupid=0, jobs=1): err= 0: pid=77923: Sat Sep 28 08:57:42 2024 00:20:05.450 write: IOPS=422, BW=106MiB/s (111MB/s)(1068MiB/10119msec); 0 zone resets 00:20:05.450 slat (usec): min=16, max=13622, avg=2336.13, stdev=4017.16 00:20:05.450 clat (msec): min=15, max=265, avg=149.21, stdev=13.30 00:20:05.450 lat (msec): min=15, max=265, avg=151.54, stdev=12.90 00:20:05.450 clat percentiles (msec): 00:20:05.450 | 1.00th=[ 95], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 144], 00:20:05.450 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 150], 60.00th=[ 153], 00:20:05.450 | 70.00th=[ 153], 80.00th=[ 155], 90.00th=[ 155], 95.00th=[ 157], 
00:20:05.450 | 99.00th=[ 159], 99.50th=[ 218], 99.90th=[ 257], 99.95th=[ 257], 00:20:05.450 | 99.99th=[ 266] 00:20:05.450 bw ( KiB/s): min=104960, max=110592, per=12.22%, avg=107750.40, stdev=1606.04, samples=20 00:20:05.450 iops : min= 410, max= 432, avg=420.90, stdev= 6.27, samples=20 00:20:05.450 lat (msec) : 20=0.07%, 50=0.37%, 100=0.56%, 250=98.85%, 500=0.14% 00:20:05.450 cpu : usr=0.68%, sys=1.20%, ctx=4549, majf=0, minf=1 00:20:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:20:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:05.450 issued rwts: total=0,4272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:05.450 job4: (groupid=0, jobs=1): err= 0: pid=77924: Sat Sep 28 08:57:42 2024 00:20:05.450 write: IOPS=168, BW=42.1MiB/s (44.2MB/s)(431MiB/10239msec); 0 zone resets 00:20:05.450 slat (usec): min=18, max=77054, avg=5794.83, stdev=10302.13 00:20:05.450 clat (msec): min=26, max=617, avg=373.87, stdev=47.21 00:20:05.450 lat (msec): min=26, max=618, avg=379.67, stdev=47.01 00:20:05.450 clat percentiles (msec): 00:20:05.450 | 1.00th=[ 129], 5.00th=[ 347], 10.00th=[ 359], 20.00th=[ 363], 00:20:05.450 | 30.00th=[ 372], 40.00th=[ 380], 50.00th=[ 384], 60.00th=[ 384], 00:20:05.450 | 70.00th=[ 388], 80.00th=[ 388], 90.00th=[ 393], 95.00th=[ 393], 00:20:05.450 | 99.00th=[ 518], 99.50th=[ 567], 99.90th=[ 617], 99.95th=[ 617], 00:20:05.450 | 99.99th=[ 617] 00:20:05.450 bw ( KiB/s): min=40960, max=45056, per=4.83%, avg=42547.20, stdev=1253.04, samples=20 00:20:05.450 iops : min= 160, max= 176, avg=166.20, stdev= 4.89, samples=20 00:20:05.450 lat (msec) : 50=0.29%, 100=0.46%, 250=1.91%, 500=96.29%, 750=1.04% 00:20:05.450 cpu : usr=0.33%, sys=0.52%, ctx=1238, majf=0, minf=1 00:20:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:20:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.450 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:05.450 issued rwts: total=0,1725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:05.450 job5: (groupid=0, jobs=1): err= 0: pid=77925: Sat Sep 28 08:57:42 2024 00:20:05.450 write: IOPS=169, BW=42.3MiB/s (44.4MB/s)(433MiB/10237msec); 0 zone resets 00:20:05.450 slat (usec): min=19, max=51673, avg=5771.84, stdev=10229.25 00:20:05.450 clat (msec): min=48, max=615, avg=372.32, stdev=46.97 00:20:05.450 lat (msec): min=48, max=615, avg=378.09, stdev=46.75 00:20:05.450 clat percentiles (msec): 00:20:05.450 | 1.00th=[ 127], 5.00th=[ 326], 10.00th=[ 355], 20.00th=[ 363], 00:20:05.450 | 30.00th=[ 372], 40.00th=[ 380], 50.00th=[ 380], 60.00th=[ 384], 00:20:05.450 | 70.00th=[ 388], 80.00th=[ 388], 90.00th=[ 393], 95.00th=[ 393], 00:20:05.450 | 99.00th=[ 518], 99.50th=[ 567], 99.90th=[ 617], 99.95th=[ 617], 00:20:05.450 | 99.99th=[ 617] 00:20:05.450 bw ( KiB/s): min=40960, max=45146, per=4.84%, avg=42705.30, stdev=1211.88, samples=20 00:20:05.450 iops : min= 160, max= 176, avg=166.80, stdev= 4.70, samples=20 00:20:05.450 lat (msec) : 50=0.23%, 100=0.46%, 250=1.91%, 500=96.36%, 750=1.04% 00:20:05.450 cpu : usr=0.27%, sys=0.60%, ctx=1881, majf=0, minf=1 00:20:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:20:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:20:05.450 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:05.450 issued rwts: total=0,1732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:05.450 job6: (groupid=0, jobs=1): err= 0: pid=77926: Sat Sep 28 08:57:42 2024 00:20:05.450 write: IOPS=167, BW=41.9MiB/s (43.9MB/s)(429MiB/10240msec); 0 zone resets 00:20:05.450 slat (usec): min=16, max=155149, avg=5827.22, stdev=10761.28 00:20:05.450 clat (msec): min=156, max=617, avg=375.90, stdev=34.74 00:20:05.450 lat (msec): min=156, max=617, avg=381.73, stdev=33.83 00:20:05.450 clat percentiles (msec): 00:20:05.450 | 1.00th=[ 236], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 363], 00:20:05.450 | 30.00th=[ 372], 40.00th=[ 380], 50.00th=[ 380], 60.00th=[ 384], 00:20:05.450 | 70.00th=[ 388], 80.00th=[ 388], 90.00th=[ 393], 95.00th=[ 393], 00:20:05.450 | 99.00th=[ 518], 99.50th=[ 567], 99.90th=[ 617], 99.95th=[ 617], 00:20:05.450 | 99.99th=[ 617] 00:20:05.450 bw ( KiB/s): min=36937, max=45056, per=4.80%, avg=42294.85, stdev=1651.97, samples=20 00:20:05.450 iops : min= 144, max= 176, avg=165.20, stdev= 6.50, samples=20 00:20:05.450 lat (msec) : 250=1.22%, 500=97.73%, 750=1.05% 00:20:05.450 cpu : usr=0.29%, sys=0.47%, ctx=2030, majf=0, minf=1 00:20:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:20:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.450 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:05.450 issued rwts: total=0,1716,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:05.450 job7: (groupid=0, jobs=1): err= 0: pid=77927: Sat Sep 28 08:57:42 2024 00:20:05.450 write: IOPS=165, BW=41.4MiB/s (43.4MB/s)(424MiB/10235msec); 0 zone resets 00:20:05.450 slat (usec): min=17, max=226055, avg=5902.27, stdev=11553.34 00:20:05.450 clat (msec): min=212, max=600, avg=380.13, stdev=30.83 00:20:05.450 lat (msec): min=227, max=600, avg=386.03, stdev=29.41 00:20:05.450 clat percentiles (msec): 00:20:05.450 | 1.00th=[ 262], 5.00th=[ 355], 10.00th=[ 359], 20.00th=[ 368], 00:20:05.450 | 30.00th=[ 376], 40.00th=[ 380], 50.00th=[ 384], 60.00th=[ 388], 00:20:05.450 | 70.00th=[ 388], 80.00th=[ 388], 90.00th=[ 393], 95.00th=[ 405], 00:20:05.450 | 99.00th=[ 527], 99.50th=[ 550], 99.90th=[ 600], 99.95th=[ 600], 00:20:05.450 | 99.99th=[ 600] 00:20:05.450 bw ( KiB/s): min=28672, max=45056, per=4.74%, avg=41779.20, stdev=3282.19, samples=20 00:20:05.450 iops : min= 112, max= 176, avg=163.20, stdev=12.82, samples=20 00:20:05.450 lat (msec) : 250=0.77%, 500=97.76%, 750=1.47% 00:20:05.450 cpu : usr=0.27%, sys=0.56%, ctx=2175, majf=0, minf=1 00:20:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:20:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.450 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:05.450 issued rwts: total=0,1696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:05.450 job8: (groupid=0, jobs=1): err= 0: pid=77932: Sat Sep 28 08:57:42 2024 00:20:05.450 write: IOPS=548, BW=137MiB/s (144MB/s)(1385MiB/10103msec); 0 zone resets 00:20:05.450 slat (usec): min=17, max=15527, avg=1798.89, stdev=3062.43 00:20:05.450 clat (msec): min=20, max=214, avg=114.86, stdev= 8.51 00:20:05.450 lat (msec): min=20, max=214, avg=116.66, stdev= 8.06 
00:20:05.450 clat percentiles (msec): 00:20:05.450 | 1.00th=[ 105], 5.00th=[ 108], 10.00th=[ 110], 20.00th=[ 111], 00:20:05.450 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 116], 60.00th=[ 117], 00:20:05.450 | 70.00th=[ 117], 80.00th=[ 118], 90.00th=[ 120], 95.00th=[ 120], 00:20:05.450 | 99.00th=[ 129], 99.50th=[ 163], 99.90th=[ 207], 99.95th=[ 207], 00:20:05.450 | 99.99th=[ 215] 00:20:05.450 bw ( KiB/s): min=129024, max=145408, per=15.90%, avg=140211.20, stdev=3315.12, samples=20 00:20:05.450 iops : min= 504, max= 568, avg=547.70, stdev=12.95, samples=20 00:20:05.450 lat (msec) : 50=0.22%, 100=0.52%, 250=99.26% 00:20:05.450 cpu : usr=1.17%, sys=1.58%, ctx=6501, majf=0, minf=1 00:20:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:20:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:05.450 issued rwts: total=0,5540,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:05.450 job9: (groupid=0, jobs=1): err= 0: pid=77933: Sat Sep 28 08:57:42 2024 00:20:05.450 write: IOPS=348, BW=87.1MiB/s (91.4MB/s)(883MiB/10138msec); 0 zone resets 00:20:05.450 slat (usec): min=15, max=121816, avg=2824.93, stdev=5356.25 00:20:05.450 clat (msec): min=123, max=319, avg=180.76, stdev=13.18 00:20:05.450 lat (msec): min=123, max=319, avg=183.58, stdev=12.46 00:20:05.450 clat percentiles (msec): 00:20:05.450 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:20:05.450 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 182], 00:20:05.450 | 70.00th=[ 184], 80.00th=[ 186], 90.00th=[ 190], 95.00th=[ 194], 00:20:05.450 | 99.00th=[ 228], 99.50th=[ 271], 99.90th=[ 309], 99.95th=[ 321], 00:20:05.450 | 99.99th=[ 321] 00:20:05.450 bw ( KiB/s): min=69632, max=94208, per=10.08%, avg=88832.00, stdev=5132.11, samples=20 00:20:05.450 iops : min= 272, max= 368, avg=347.00, stdev=20.05, samples=20 00:20:05.450 lat (msec) : 250=99.29%, 500=0.71% 00:20:05.450 cpu : usr=0.61%, sys=1.04%, ctx=1827, majf=0, minf=1 00:20:05.450 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:20:05.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.450 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:05.450 issued rwts: total=0,3533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:05.450 job10: (groupid=0, jobs=1): err= 0: pid=77934: Sat Sep 28 08:57:42 2024 00:20:05.450 write: IOPS=169, BW=42.4MiB/s (44.5MB/s)(435MiB/10250msec); 0 zone resets 00:20:05.450 slat (usec): min=20, max=38128, avg=5751.25, stdev=10143.26 00:20:05.450 clat (msec): min=23, max=617, avg=371.04, stdev=50.37 00:20:05.450 lat (msec): min=23, max=617, avg=376.79, stdev=50.28 00:20:05.451 clat percentiles (msec): 00:20:05.451 | 1.00th=[ 111], 5.00th=[ 309], 10.00th=[ 351], 20.00th=[ 363], 00:20:05.451 | 30.00th=[ 368], 40.00th=[ 380], 50.00th=[ 384], 60.00th=[ 384], 00:20:05.451 | 70.00th=[ 388], 80.00th=[ 388], 90.00th=[ 393], 95.00th=[ 393], 00:20:05.451 | 99.00th=[ 518], 99.50th=[ 567], 99.90th=[ 617], 99.95th=[ 617], 00:20:05.451 | 99.99th=[ 617] 00:20:05.451 bw ( KiB/s): min=40960, max=49152, per=4.87%, avg=42905.60, stdev=1816.66, samples=20 00:20:05.451 iops : min= 160, max= 192, avg=167.60, stdev= 7.10, samples=20 00:20:05.451 lat (msec) : 50=0.46%, 100=0.46%, 250=1.90%, 500=96.15%, 
750=1.03% 00:20:05.451 cpu : usr=0.34%, sys=0.56%, ctx=2147, majf=0, minf=1 00:20:05.451 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:20:05.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:05.451 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:20:05.451 issued rwts: total=0,1740,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:05.451 latency : target=0, window=0, percentile=100.00%, depth=64 00:20:05.451 00:20:05.451 Run status group 0 (all jobs): 00:20:05.451 WRITE: bw=861MiB/s (903MB/s), 41.4MiB/s-137MiB/s (43.4MB/s-144MB/s), io=8826MiB (9254MB), run=10103-10250msec 00:20:05.451 00:20:05.451 Disk stats (read/write): 00:20:05.451 nvme0n1: ios=50/10931, merge=0/0, ticks=54/1213138, in_queue=1213192, util=97.89% 00:20:05.451 nvme10n1: ios=49/6984, merge=0/0, ticks=50/1209546, in_queue=1209596, util=97.93% 00:20:05.451 nvme1n1: ios=43/8335, merge=0/0, ticks=36/1210786, in_queue=1210822, util=98.00% 00:20:05.451 nvme2n1: ios=40/8398, merge=0/0, ticks=41/1211500, in_queue=1211541, util=98.18% 00:20:05.451 nvme3n1: ios=35/3442, merge=0/0, ticks=72/1238652, in_queue=1238724, util=98.35% 00:20:05.451 nvme4n1: ios=0/3453, merge=0/0, ticks=0/1238233, in_queue=1238233, util=98.23% 00:20:05.451 nvme5n1: ios=0/3422, merge=0/0, ticks=0/1239101, in_queue=1239101, util=98.39% 00:20:05.451 nvme6n1: ios=0/3376, merge=0/0, ticks=0/1238003, in_queue=1238003, util=98.37% 00:20:05.451 nvme7n1: ios=0/10931, merge=0/0, ticks=0/1213802, in_queue=1213802, util=98.69% 00:20:05.451 nvme8n1: ios=0/6921, merge=0/0, ticks=0/1209363, in_queue=1209363, util=98.75% 00:20:05.451 nvme9n1: ios=0/3471, merge=0/0, ticks=0/1239872, in_queue=1239872, util=99.05% 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:05.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.451 08:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:20:05.451 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:20:05.451 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.451 08:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:20:05.451 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:20:05.451 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.451 08:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:20:05.451 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:20:05.451 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.452 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:05.452 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.452 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:05.452 08:57:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:20:05.452 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.452 08:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:20:05.452 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:20:05.452 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.452 08:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:20:05.452 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:05.452 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:20:05.711 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.711 
08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:05.711 rmmod nvme_tcp 00:20:05.711 rmmod nvme_fabrics 00:20:05.711 rmmod nvme_keyring 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:20:05.711 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 77242 ']' 00:20:05.712 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 77242 00:20:05.712 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 77242 ']' 00:20:05.712 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 77242 00:20:05.712 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:20:05.712 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.712 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77242 00:20:05.712 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:05.712 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:05.712 killing process with pid 77242 00:20:05.712 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77242' 00:20:05.712 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 77242 00:20:05.712 08:57:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 77242 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:09.005 08:57:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:20:09.005 00:20:09.005 real 0m52.564s 00:20:09.005 user 2m59.721s 00:20:09.005 sys 0m25.321s 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:09.005 ************************************ 00:20:09.005 END TEST 
nvmf_multiconnection 00:20:09.005 ************************************ 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:09.005 ************************************ 00:20:09.005 START TEST nvmf_initiator_timeout 00:20:09.005 ************************************ 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:20:09.005 * Looking for test storage... 00:20:09.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:09.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.005 --rc genhtml_branch_coverage=1 00:20:09.005 --rc genhtml_function_coverage=1 00:20:09.005 --rc genhtml_legend=1 00:20:09.005 --rc geninfo_all_blocks=1 00:20:09.005 --rc geninfo_unexecuted_blocks=1 00:20:09.005 00:20:09.005 ' 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:09.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.005 --rc genhtml_branch_coverage=1 00:20:09.005 --rc genhtml_function_coverage=1 00:20:09.005 --rc genhtml_legend=1 00:20:09.005 --rc geninfo_all_blocks=1 00:20:09.005 --rc geninfo_unexecuted_blocks=1 00:20:09.005 00:20:09.005 ' 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:09.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.005 --rc genhtml_branch_coverage=1 00:20:09.005 --rc genhtml_function_coverage=1 00:20:09.005 --rc genhtml_legend=1 00:20:09.005 --rc geninfo_all_blocks=1 00:20:09.005 --rc geninfo_unexecuted_blocks=1 00:20:09.005 00:20:09.005 ' 00:20:09.005 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:09.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.005 --rc genhtml_branch_coverage=1 00:20:09.005 --rc genhtml_function_coverage=1 00:20:09.006 --rc genhtml_legend=1 00:20:09.006 --rc geninfo_all_blocks=1 00:20:09.006 --rc geninfo_unexecuted_blocks=1 00:20:09.006 00:20:09.006 ' 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.006 08:57:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:09.006 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:09.006 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:09.007 Cannot find device "nvmf_init_br" 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:09.007 Cannot find device "nvmf_init_br2" 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:09.007 Cannot find device "nvmf_tgt_br" 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:09.007 Cannot find device "nvmf_tgt_br2" 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:09.007 Cannot find device "nvmf_init_br" 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:09.007 Cannot find device "nvmf_init_br2" 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:09.007 Cannot find device "nvmf_tgt_br" 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:09.007 Cannot find device "nvmf_tgt_br2" 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:20:09.007 08:57:46 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:09.007 Cannot find device "nvmf_br" 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:09.007 Cannot find device "nvmf_init_if" 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:09.007 Cannot find device "nvmf_init_if2" 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:09.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:09.007 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:09.007 08:57:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:09.267 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:09.267 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:20:09.267 00:20:09.267 --- 10.0.0.3 ping statistics --- 00:20:09.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.267 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:09.267 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:09.267 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:20:09.267 00:20:09.267 --- 10.0.0.4 ping statistics --- 00:20:09.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.267 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:09.267 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:09.267 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:09.267 00:20:09.267 --- 10.0.0.1 ping statistics --- 00:20:09.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.267 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:09.267 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.267 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:20:09.267 00:20:09.267 --- 10.0.0.2 ping statistics --- 00:20:09.267 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.267 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # return 0 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=78374 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 78374 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 78374 ']' 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.267 08:57:47 
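The 10.0.0.x pings above are verifying the virtual test network that nvmf_veth_init just built: a network namespace for the target, veth pairs whose _if ends carry the addresses and whose _br ends are enslaved to a bridge, plus iptables rules admitting TCP port 4420. Reconstructed as standalone commands this looks roughly like the sketch below; interface names and addresses are the ones echoed in the trace, while the helper's error handling and its initial cleanup of stale links are omitted.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
# Bridge the _br ends together so host and namespace can reach each other.
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3   # host -> target namespace, as in the trace above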
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:09.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.267 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.268 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:09.268 08:57:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:09.527 [2024-09-28 08:57:47.383199] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:09.527 [2024-09-28 08:57:47.383394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.786 [2024-09-28 08:57:47.565723] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.045 [2024-09-28 08:57:47.815524] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.045 [2024-09-28 08:57:47.815599] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.045 [2024-09-28 08:57:47.815632] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.045 [2024-09-28 08:57:47.815644] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.045 [2024-09-28 08:57:47.815658] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:10.045 [2024-09-28 08:57:47.815905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.045 [2024-09-28 08:57:47.816047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.045 [2024-09-28 08:57:47.816858] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.045 [2024-09-28 08:57:47.816859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.045 [2024-09-28 08:57:47.988490] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:10.613 Malloc0 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:10.613 Delay0 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:10.613 [2024-09-28 08:57:48.454025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:10.613 08:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:10.613 [2024-09-28 08:57:48.486316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.613 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:10.872 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:20:10.872 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:20:10.872 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:20:10.872 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:20:10.872 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:20:12.775 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:20:12.775 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:20:12.775 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:20:12.775 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:20:12.775 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:20:12.775 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:20:12.775 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=78438 00:20:12.775 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
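Stripped of the rpc_cmd and xtrace plumbing, the target bring-up and host connection traced above amount to the command sequence below. The nvmf_tgt path, core mask, and hostnqn/hostid are the values from this run; rpc.py talks over the default /var/tmp/spdk.sock unix socket, so it does not need the namespace, and the plain sleep stands in for waitforlisten, which actually polls that socket.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Start the target inside the namespace created earlier.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
sleep 2
# 64 MB malloc bdev with 512 B blocks, wrapped in a delay bdev at 30 us latencies.
"$rpc_py" bdev_malloc_create 64 512 -b Malloc0
"$rpc_py" bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# Connect from the host side; lsblk then shows a namespace with serial SPDKISFASTANDAWESOME.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 \
    --hostid=b09210cb-7022-43fe-9129-03e098f7a403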
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:20:12.775 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:20:12.775 [global] 00:20:12.775 thread=1 00:20:12.775 invalidate=1 00:20:12.775 rw=write 00:20:12.775 time_based=1 00:20:12.775 runtime=60 00:20:12.775 ioengine=libaio 00:20:12.775 direct=1 00:20:12.775 bs=4096 00:20:12.775 iodepth=1 00:20:12.775 norandommap=0 00:20:12.775 numjobs=1 00:20:12.775 00:20:12.775 verify_dump=1 00:20:12.775 verify_backlog=512 00:20:12.775 verify_state_save=0 00:20:12.775 do_verify=1 00:20:12.775 verify=crc32c-intel 00:20:12.775 [job0] 00:20:12.775 filename=/dev/nvme0n1 00:20:12.775 Could not set queue depth (nvme0n1) 00:20:13.033 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:13.034 fio-3.35 00:20:13.034 Starting 1 thread 00:20:16.321 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:16.322 true 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:16.322 true 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:16.322 true 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:16.322 true 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.322 08:57:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
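The fio job shown above writes to /dev/nvme0n1 for 60 seconds while the script flips the Delay0 latencies underneath it: the first group of RPCs raises them to 31000000 us (31 s, and 310000000 us for p99 writes), which sits beyond the stock 30 s NVMe host I/O timeout, and after a 3 s window they are dropped back to 30 us so the run can complete; this appears intended to exercise how the initiator copes with commands held longer than its timeout. The same updates, reduced to plain commands (the trace issues each call individually, exactly as listed):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Push latencies past the host's default ~30 s I/O timeout window.
"$rpc_py" bdev_delay_update_latency Delay0 avg_read  31000000
"$rpc_py" bdev_delay_update_latency Delay0 avg_write 31000000
"$rpc_py" bdev_delay_update_latency Delay0 p99_read  31000000
"$rpc_py" bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
# Restore the original 30 us latencies so the 60 s fio job can finish.
"$rpc_py" bdev_delay_update_latency Delay0 avg_read  30
"$rpc_py" bdev_delay_update_latency Delay0 avg_write 30
"$rpc_py" bdev_delay_update_latency Delay0 p99_read  30
"$rpc_py" bdev_delay_update_latency Delay0 p99_write 30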
common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:18.855 true 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:18.855 true 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:18.855 true 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:18.855 true 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:20:18.855 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 78438 00:21:15.084 00:21:15.084 job0: (groupid=0, jobs=1): err= 0: pid=78459: Sat Sep 28 08:58:50 2024 00:21:15.084 read: IOPS=708, BW=2836KiB/s (2904kB/s)(166MiB/60001msec) 00:21:15.084 slat (usec): min=10, max=10046, avg=13.84, stdev=61.42 00:21:15.084 clat (usec): min=196, max=40854k, avg=1198.11, stdev=198089.60 00:21:15.084 lat (usec): min=209, max=40854k, avg=1211.95, stdev=198089.60 00:21:15.084 clat percentiles (usec): 00:21:15.084 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:21:15.084 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 239], 00:21:15.084 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 281], 00:21:15.084 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 400], 99.95th=[ 523], 00:21:15.084 | 99.99th=[ 1467] 00:21:15.084 write: IOPS=716, BW=2867KiB/s (2936kB/s)(168MiB/60001msec); 0 zone resets 00:21:15.084 slat (usec): min=13, max=1150, avg=19.06, stdev= 7.77 00:21:15.084 clat (usec): min=141, max=2557, avg=174.47, stdev=27.50 00:21:15.084 lat (usec): min=159, max=2589, avg=193.53, stdev=29.41 00:21:15.084 clat percentiles (usec): 00:21:15.084 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:21:15.084 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 176], 00:21:15.084 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 204], 95.00th=[ 217], 00:21:15.084 | 99.00th=[ 
243], 99.50th=[ 253], 99.90th=[ 314], 99.95th=[ 478], 00:21:15.084 | 99.99th=[ 676] 00:21:15.084 bw ( KiB/s): min= 4096, max=10208, per=100.00%, avg=8838.74, stdev=1078.30, samples=38 00:21:15.084 iops : min= 1024, max= 2552, avg=2209.68, stdev=269.57, samples=38 00:21:15.084 lat (usec) : 250=88.13%, 500=11.81%, 750=0.04%, 1000=0.01% 00:21:15.084 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:21:15.084 cpu : usr=0.46%, sys=1.87%, ctx=85551, majf=0, minf=5 00:21:15.084 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:15.084 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.084 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.084 issued rwts: total=42534,43008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.084 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:15.084 00:21:15.084 Run status group 0 (all jobs): 00:21:15.084 READ: bw=2836KiB/s (2904kB/s), 2836KiB/s-2836KiB/s (2904kB/s-2904kB/s), io=166MiB (174MB), run=60001-60001msec 00:21:15.084 WRITE: bw=2867KiB/s (2936kB/s), 2867KiB/s-2867KiB/s (2936kB/s-2936kB/s), io=168MiB (176MB), run=60001-60001msec 00:21:15.084 00:21:15.084 Disk stats (read/write): 00:21:15.084 nvme0n1: ios=42710/42539, merge=0/0, ticks=10445/7852, in_queue=18297, util=99.57% 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:15.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:15.084 nvmf hotplug test: fio successful as expected 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.084 08:58:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:21:15.084 08:58:51 
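For reference, the initiator-side steps of the nvmf_initiator_timeout test that just completed can be reproduced by hand against an already-running target whose cnode1 namespace is backed by the Delay0 delay bdev. The sketch below is a simplification of the harness: the connect arguments, job-file contents, and latency values are copied from the trace above, while the rpc.py path, the initiator_timeout.fio file name, and the background/wait structure are assumptions.

  # Connect to the subsystem and wait for its namespace to appear by serial.
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 \
               --hostid=b09210cb-7022-43fe-9129-03e098f7a403 \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # The [job0] section printed by fio-wrapper above, written out as a job file.
  cat > initiator_timeout.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=60
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel
  [job0]
  filename=/dev/nvme0n1
  EOF
  fio initiator_timeout.fio &
  fio_pid=$!

  # While fio runs, inflate the Delay0 latencies (values as used in the trace)...
  sleep 3
  $rpc bdev_delay_update_latency Delay0 avg_read  31000000
  $rpc bdev_delay_update_latency Delay0 avg_write 31000000
  $rpc bdev_delay_update_latency Delay0 p99_read  31000000
  $rpc bdev_delay_update_latency Delay0 p99_write 310000000

  # ...then restore them so outstanding I/O can drain and the verify pass finishes.
  sleep 3
  $rpc bdev_delay_update_latency Delay0 avg_read  30
  $rpc bdev_delay_update_latency Delay0 avg_write 30
  $rpc bdev_delay_update_latency Delay0 p99_read  30
  $rpc bdev_delay_update_latency Delay0 p99_write 30

  wait "$fio_pid"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The latency bump is what exercises the initiator's timeout handling while fio is writing; restoring the small values lets queued I/O complete, which is why the run above still ends with "fio successful as expected".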
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:15.084 rmmod nvme_tcp 00:21:15.084 rmmod nvme_fabrics 00:21:15.084 rmmod nvme_keyring 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 78374 ']' 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 78374 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 78374 ']' 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 78374 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78374 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:15.084 killing process with pid 78374 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78374' 00:21:15.084 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 78374 00:21:15.085 08:58:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 78374 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:21:15.085 08:58:52 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:21:15.085 00:21:15.085 real 1m5.857s 00:21:15.085 user 3m53.817s 00:21:15.085 sys 0m22.722s 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:21:15.085 ************************************ 00:21:15.085 END TEST nvmf_initiator_timeout 00:21:15.085 ************************************ 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT 
SIGTERM EXIT 00:21:15.085 00:21:15.085 real 7m40.718s 00:21:15.085 user 18m39.714s 00:21:15.085 sys 1m53.451s 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:15.085 08:58:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:15.085 ************************************ 00:21:15.085 END TEST nvmf_target_extra 00:21:15.085 ************************************ 00:21:15.085 08:58:52 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:15.085 08:58:52 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:15.085 08:58:52 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:15.085 08:58:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:15.085 ************************************ 00:21:15.085 START TEST nvmf_host 00:21:15.085 ************************************ 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:15.085 * Looking for test storage... 00:21:15.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:15.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.085 --rc genhtml_branch_coverage=1 00:21:15.085 --rc genhtml_function_coverage=1 00:21:15.085 --rc genhtml_legend=1 00:21:15.085 --rc geninfo_all_blocks=1 00:21:15.085 --rc geninfo_unexecuted_blocks=1 00:21:15.085 00:21:15.085 ' 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:15.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.085 --rc genhtml_branch_coverage=1 00:21:15.085 --rc genhtml_function_coverage=1 00:21:15.085 --rc genhtml_legend=1 00:21:15.085 --rc geninfo_all_blocks=1 00:21:15.085 --rc geninfo_unexecuted_blocks=1 00:21:15.085 00:21:15.085 ' 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:15.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.085 --rc genhtml_branch_coverage=1 00:21:15.085 --rc genhtml_function_coverage=1 00:21:15.085 --rc genhtml_legend=1 00:21:15.085 --rc geninfo_all_blocks=1 00:21:15.085 --rc geninfo_unexecuted_blocks=1 00:21:15.085 00:21:15.085 ' 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:15.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.085 --rc genhtml_branch_coverage=1 00:21:15.085 --rc genhtml_function_coverage=1 00:21:15.085 --rc genhtml_legend=1 00:21:15.085 --rc geninfo_all_blocks=1 00:21:15.085 --rc geninfo_unexecuted_blocks=1 00:21:15.085 00:21:15.085 ' 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.085 08:58:52 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:15.085 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:15.086 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.086 ************************************ 00:21:15.086 START TEST nvmf_identify 00:21:15.086 ************************************ 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:15.086 * Looking for test storage... 
00:21:15.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:15.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.086 --rc genhtml_branch_coverage=1 00:21:15.086 --rc genhtml_function_coverage=1 00:21:15.086 --rc genhtml_legend=1 00:21:15.086 --rc geninfo_all_blocks=1 00:21:15.086 --rc geninfo_unexecuted_blocks=1 00:21:15.086 00:21:15.086 ' 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:15.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.086 --rc genhtml_branch_coverage=1 00:21:15.086 --rc genhtml_function_coverage=1 00:21:15.086 --rc genhtml_legend=1 00:21:15.086 --rc geninfo_all_blocks=1 00:21:15.086 --rc geninfo_unexecuted_blocks=1 00:21:15.086 00:21:15.086 ' 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:15.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.086 --rc genhtml_branch_coverage=1 00:21:15.086 --rc genhtml_function_coverage=1 00:21:15.086 --rc genhtml_legend=1 00:21:15.086 --rc geninfo_all_blocks=1 00:21:15.086 --rc geninfo_unexecuted_blocks=1 00:21:15.086 00:21:15.086 ' 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:15.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:15.086 --rc genhtml_branch_coverage=1 00:21:15.086 --rc genhtml_function_coverage=1 00:21:15.086 --rc genhtml_legend=1 00:21:15.086 --rc geninfo_all_blocks=1 00:21:15.086 --rc geninfo_unexecuted_blocks=1 00:21:15.086 00:21:15.086 ' 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.086 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.087 
08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:15.087 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.087 08:58:52 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:15.087 08:58:52 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:15.087 Cannot find device "nvmf_init_br" 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:15.087 Cannot find device "nvmf_init_br2" 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:15.087 Cannot find device "nvmf_tgt_br" 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:21:15.087 Cannot find device "nvmf_tgt_br2" 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:15.087 Cannot find device "nvmf_init_br" 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:15.087 Cannot find device "nvmf_init_br2" 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:15.087 Cannot find device "nvmf_tgt_br" 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:21:15.087 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:15.347 Cannot find device "nvmf_tgt_br2" 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:15.347 Cannot find device "nvmf_br" 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:15.347 Cannot find device "nvmf_init_if" 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:15.347 Cannot find device "nvmf_init_if2" 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:15.347 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:15.347 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:15.347 
08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:15.347 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:15.347 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:21:15.347 00:21:15.347 --- 10.0.0.3 ping statistics --- 00:21:15.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.347 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:21:15.347 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:15.606 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:15.606 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:21:15.606 00:21:15.606 --- 10.0.0.4 ping statistics --- 00:21:15.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.606 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:15.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:15.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:21:15.606 00:21:15.606 --- 10.0.0.1 ping statistics --- 00:21:15.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.606 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:15.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:21:15.606 00:21:15.606 --- 10.0.0.2 ping statistics --- 00:21:15.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.606 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=79390 00:21:15.606 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:15.607 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:15.607 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 79390 00:21:15.607 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 79390 ']' 00:21:15.607 
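The nvmf_veth_init sequence above reduces to a small veth/bridge/namespace topology. A condensed sketch of the same setup as stand-alone commands (interface names, addresses, and firewall rules are the ones shown in the trace; run as root, and note the real helper first tears down any pre-existing interfaces, which is where the "Cannot find device" messages come from):

  ip netns add nvmf_tgt_ns_spdk

  # Initiator-side and target-side veth pairs.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # Target ends live inside the namespace.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # 10.0.0.1/.2 are the initiator addresses, 10.0.0.3/.4 the target addresses.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
  ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
  ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # A bridge ties the host-side peers together.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br

  # Allow NVMe/TCP (port 4420) in and forwarding across the bridge.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity checks, as in the trace.
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Everything initiator-side stays in the root namespace while both target addresses live behind nvmf_tgt_ns_spdk, which is why every target-side command in the log is wrapped in ip netns exec.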
08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.607 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:15.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.607 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.607 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:15.607 08:58:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:15.607 [2024-09-28 08:58:53.509853] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:21:15.607 [2024-09-28 08:58:53.510014] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.866 [2024-09-28 08:58:53.686602] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:16.125 [2024-09-28 08:58:53.896724] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.125 [2024-09-28 08:58:53.896805] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.125 [2024-09-28 08:58:53.896851] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.125 [2024-09-28 08:58:53.896864] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.125 [2024-09-28 08:58:53.896875] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
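The target itself runs inside that namespace. A minimal stand-alone version of the launch-and-wait step above, assuming the default /var/tmp/spdk.sock RPC socket and a simple polling loop in place of the harness's waitforlisten:

  # Start the NVMe-oF target in the namespace: shm id 0, all tracepoints, cores 0-3.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Wait until the app answers on its RPC socket before sending any configuration.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done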
00:21:16.125 [2024-09-28 08:58:53.897046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.125 [2024-09-28 08:58:53.897964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.125 [2024-09-28 08:58:53.898076] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.125 [2024-09-28 08:58:53.898092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:16.125 [2024-09-28 08:58:54.054947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:16.693 [2024-09-28 08:58:54.472696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:16.693 Malloc0 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:16.693 [2024-09-28 08:58:54.602630] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.693 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:16.693 [ 00:21:16.693 { 00:21:16.693 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:16.693 "subtype": "Discovery", 00:21:16.693 "listen_addresses": [ 00:21:16.693 { 00:21:16.693 "trtype": "TCP", 00:21:16.693 "adrfam": "IPv4", 00:21:16.693 "traddr": "10.0.0.3", 00:21:16.693 "trsvcid": "4420" 00:21:16.693 } 00:21:16.693 ], 00:21:16.693 "allow_any_host": true, 00:21:16.693 "hosts": [] 00:21:16.693 }, 00:21:16.694 { 00:21:16.694 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.694 "subtype": "NVMe", 00:21:16.694 "listen_addresses": [ 00:21:16.694 { 00:21:16.694 "trtype": "TCP", 00:21:16.694 "adrfam": "IPv4", 00:21:16.694 "traddr": "10.0.0.3", 00:21:16.694 "trsvcid": "4420" 00:21:16.694 } 00:21:16.694 ], 00:21:16.694 "allow_any_host": true, 00:21:16.694 "hosts": [], 00:21:16.694 "serial_number": "SPDK00000000000001", 00:21:16.694 "model_number": "SPDK bdev Controller", 00:21:16.694 "max_namespaces": 32, 00:21:16.694 "min_cntlid": 1, 00:21:16.694 "max_cntlid": 65519, 00:21:16.694 "namespaces": [ 00:21:16.694 { 00:21:16.694 "nsid": 1, 00:21:16.694 "bdev_name": "Malloc0", 00:21:16.694 "name": "Malloc0", 00:21:16.694 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:16.694 "eui64": "ABCDEF0123456789", 00:21:16.694 "uuid": "ef9020ae-cfc4-4ae1-897d-8483e59663eb" 00:21:16.694 } 00:21:16.694 ] 00:21:16.694 } 00:21:16.694 ] 00:21:16.694 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.694 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:16.956 [2024-09-28 08:58:54.690437] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
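Taken together, the rpc_cmd calls above amount to the following target-side setup. This is a minimal standalone sketch using scripts/rpc.py from an SPDK checkout, not the test script itself; addresses, NQN, serial number and flags are copied from the run above, while launching nvmf_tgt by hand (and its -m core mask) is an assumption for illustration:

  # Sketch of the target-side setup exercised above; values copied from the log.
  build/bin/nvmf_tgt -m 0xF &    # assumed launch; the run above used four cores
  # (wait for /var/tmp/spdk.sock to appear before issuing RPCs)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_get_subsystems    # should report both subsystems, as in the JSON above
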
00:21:16.956 [2024-09-28 08:58:54.690555] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79432 ] 00:21:16.956 [2024-09-28 08:58:54.857065] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:16.957 [2024-09-28 08:58:54.857251] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:16.957 [2024-09-28 08:58:54.857266] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:16.957 [2024-09-28 08:58:54.857300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:16.957 [2024-09-28 08:58:54.857318] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:16.957 [2024-09-28 08:58:54.857753] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:16.957 [2024-09-28 08:58:54.861954] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:21:16.957 [2024-09-28 08:58:54.869890] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:16.957 [2024-09-28 08:58:54.869937] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:16.957 [2024-09-28 08:58:54.869948] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:16.957 [2024-09-28 08:58:54.869954] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:16.957 [2024-09-28 08:58:54.870060] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.870081] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.870090] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:16.957 [2024-09-28 08:58:54.870122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:16.957 [2024-09-28 08:58:54.870185] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:16.957 [2024-09-28 08:58:54.876876] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.957 [2024-09-28 08:58:54.876925] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.957 [2024-09-28 08:58:54.876934] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.876943] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:16.957 [2024-09-28 08:58:54.876982] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:16.957 [2024-09-28 08:58:54.877004] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:16.957 [2024-09-28 08:58:54.877015] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:16.957 [2024-09-28 08:58:54.877062] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.877077] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:21:16.957 [2024-09-28 08:58:54.877084] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:16.957 [2024-09-28 08:58:54.877116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.957 [2024-09-28 08:58:54.877171] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:16.957 [2024-09-28 08:58:54.877298] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.957 [2024-09-28 08:58:54.877313] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.957 [2024-09-28 08:58:54.877320] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.877327] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:16.957 [2024-09-28 08:58:54.877338] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:16.957 [2024-09-28 08:58:54.877354] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:16.957 [2024-09-28 08:58:54.877368] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.877375] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.877382] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:16.957 [2024-09-28 08:58:54.877398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.957 [2024-09-28 08:58:54.877431] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:16.957 [2024-09-28 08:58:54.877491] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.957 [2024-09-28 08:58:54.877504] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.957 [2024-09-28 08:58:54.877511] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.877517] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:16.957 [2024-09-28 08:58:54.877527] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:16.957 [2024-09-28 08:58:54.877540] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:16.957 [2024-09-28 08:58:54.877555] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.877564] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.877571] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:16.957 [2024-09-28 08:58:54.877583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.957 [2024-09-28 08:58:54.877608] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:16.957 [2024-09-28 08:58:54.877669] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:21:16.957 [2024-09-28 08:58:54.877680] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.957 [2024-09-28 08:58:54.877685] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.877692] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:16.957 [2024-09-28 08:58:54.877701] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:16.957 [2024-09-28 08:58:54.877717] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.877725] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.877735] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:16.957 [2024-09-28 08:58:54.877750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.957 [2024-09-28 08:58:54.877775] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:16.957 [2024-09-28 08:58:54.877834] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.957 [2024-09-28 08:58:54.877845] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.957 [2024-09-28 08:58:54.877851] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.877857] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:16.957 [2024-09-28 08:58:54.877866] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:16.957 [2024-09-28 08:58:54.877917] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:16.957 [2024-09-28 08:58:54.877935] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:16.957 [2024-09-28 08:58:54.878044] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:16.957 [2024-09-28 08:58:54.878053] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:16.957 [2024-09-28 08:58:54.878067] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.878075] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.878085] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:16.957 [2024-09-28 08:58:54.878098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.957 [2024-09-28 08:58:54.878127] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:16.957 [2024-09-28 08:58:54.878199] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.957 [2024-09-28 08:58:54.878210] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.957 [2024-09-28 
08:58:54.878216] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.957 [2024-09-28 08:58:54.878223] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:16.957 [2024-09-28 08:58:54.878232] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:16.957 [2024-09-28 08:58:54.878249] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.878257] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.878278] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:16.958 [2024-09-28 08:58:54.878291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.958 [2024-09-28 08:58:54.878321] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:16.958 [2024-09-28 08:58:54.878367] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.958 [2024-09-28 08:58:54.878379] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.958 [2024-09-28 08:58:54.878384] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.878390] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:16.958 [2024-09-28 08:58:54.878399] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:16.958 [2024-09-28 08:58:54.878411] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:16.958 [2024-09-28 08:58:54.878434] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:16.958 [2024-09-28 08:58:54.878450] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:16.958 [2024-09-28 08:58:54.878473] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.878482] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:16.958 [2024-09-28 08:58:54.878495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.958 [2024-09-28 08:58:54.878524] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:16.958 [2024-09-28 08:58:54.878633] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:16.958 [2024-09-28 08:58:54.878647] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:16.958 [2024-09-28 08:58:54.878653] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.878661] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:21:16.958 [2024-09-28 08:58:54.878669] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): 
expected_datao=0, payload_size=4096 00:21:16.958 [2024-09-28 08:58:54.878676] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.878692] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.878712] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.878739] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.958 [2024-09-28 08:58:54.878750] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.958 [2024-09-28 08:58:54.878755] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.878762] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:16.958 [2024-09-28 08:58:54.878791] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:16.958 [2024-09-28 08:58:54.878818] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:16.958 [2024-09-28 08:58:54.878827] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:16.958 [2024-09-28 08:58:54.878835] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:16.958 [2024-09-28 08:58:54.878843] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:16.958 [2024-09-28 08:58:54.878852] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:16.958 [2024-09-28 08:58:54.878868] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:16.958 [2024-09-28 08:58:54.878887] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.878895] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.878904] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:16.958 [2024-09-28 08:58:54.878919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:16.958 [2024-09-28 08:58:54.878950] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:16.958 [2024-09-28 08:58:54.879034] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.958 [2024-09-28 08:58:54.879046] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.958 [2024-09-28 08:58:54.879054] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.879061] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:16.958 [2024-09-28 08:58:54.879076] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.879085] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.879091] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:16.958 [2024-09-28 08:58:54.879113] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.958 [2024-09-28 08:58:54.879130] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.879137] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.879142] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:21:16.958 [2024-09-28 08:58:54.879152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.958 [2024-09-28 08:58:54.879160] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.879166] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.879177] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:21:16.958 [2024-09-28 08:58:54.879188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.958 [2024-09-28 08:58:54.879197] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.879203] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.879208] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.958 [2024-09-28 08:58:54.879218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.958 [2024-09-28 08:58:54.879225] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:16.958 [2024-09-28 08:58:54.879246] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:16.958 [2024-09-28 08:58:54.879257] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.879264] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:16.958 [2024-09-28 08:58:54.879275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.958 [2024-09-28 08:58:54.879306] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:16.958 [2024-09-28 08:58:54.879318] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:21:16.958 [2024-09-28 08:58:54.879331] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:21:16.958 [2024-09-28 08:58:54.879339] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.958 [2024-09-28 08:58:54.879346] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:16.958 [2024-09-28 08:58:54.879450] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.958 [2024-09-28 08:58:54.879461] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.958 [2024-09-28 08:58:54.879466] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
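By this point in the trace the discovery controller is fully initialized: FABRIC CONNECT, the VS/CAP/CC/CSTS register reads, CC.EN = 1, Identify, AER configuration and the keep-alive timer have all completed, and the Get Log Page exchanges that follow fetch the discovery log decoded in the report further below. For comparison only, the same discovery could be performed with the standard kernel initiator, assuming nvme-cli and the nvme-tcp module are available on the host; this is not part of this test run:

  # Not part of this run: equivalent discovery via the kernel initiator.
  modprobe nvme-tcp
  nvme discover -t tcp -a 10.0.0.3 -s 4420
  # Expect the same two records as the report below: the discovery subsystem
  # itself and nqn.2016-06.io.spdk:cnode1, both listening at 10.0.0.3:4420.
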
00:21:16.958 [2024-09-28 08:58:54.879473] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:16.958 [2024-09-28 08:58:54.879482] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:16.958 [2024-09-28 08:58:54.879491] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:16.958 [2024-09-28 08:58:54.879512] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.879520] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:16.958 [2024-09-28 08:58:54.879543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.958 [2024-09-28 08:58:54.879571] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:16.958 [2024-09-28 08:58:54.879651] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:16.958 [2024-09-28 08:58:54.879664] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:16.958 [2024-09-28 08:58:54.879670] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:16.958 [2024-09-28 08:58:54.879686] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:21:16.958 [2024-09-28 08:58:54.879693] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:21:16.958 [2024-09-28 08:58:54.879700] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.879716] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.879724] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.879737] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.959 [2024-09-28 08:58:54.879746] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.959 [2024-09-28 08:58:54.879751] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.879761] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:16.959 [2024-09-28 08:58:54.879788] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:16.959 [2024-09-28 08:58:54.879869] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.879885] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:16.959 [2024-09-28 08:58:54.879899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.959 [2024-09-28 08:58:54.879914] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.879922] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.879928] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:21:16.959 [2024-09-28 
08:58:54.879941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:16.959 [2024-09-28 08:58:54.879974] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:16.959 [2024-09-28 08:58:54.879992] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:21:16.959 [2024-09-28 08:58:54.880218] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:16.959 [2024-09-28 08:58:54.880241] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:16.959 [2024-09-28 08:58:54.880249] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.880256] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:21:16.959 [2024-09-28 08:58:54.880263] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:21:16.959 [2024-09-28 08:58:54.880271] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.880282] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.880289] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.880304] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.959 [2024-09-28 08:58:54.880314] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.959 [2024-09-28 08:58:54.880319] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.880326] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:21:16.959 [2024-09-28 08:58:54.880351] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.959 [2024-09-28 08:58:54.880362] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.959 [2024-09-28 08:58:54.880367] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.880376] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:16.959 [2024-09-28 08:58:54.880407] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.880422] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:16.959 [2024-09-28 08:58:54.880438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.959 [2024-09-28 08:58:54.880474] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:16.959 [2024-09-28 08:58:54.880575] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:16.959 [2024-09-28 08:58:54.880586] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:16.959 [2024-09-28 08:58:54.880592] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.880598] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:21:16.959 [2024-09-28 08:58:54.880605] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on 
tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:21:16.959 [2024-09-28 08:58:54.880611] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.880622] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.880634] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.880646] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.959 [2024-09-28 08:58:54.880655] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.959 [2024-09-28 08:58:54.880661] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.880667] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:16.959 [2024-09-28 08:58:54.880689] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.880698] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:16.959 [2024-09-28 08:58:54.880710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.959 [2024-09-28 08:58:54.880742] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:16.959 [2024-09-28 08:58:54.884862] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:16.959 [2024-09-28 08:58:54.884908] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:16.959 [2024-09-28 08:58:54.884917] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.884925] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:21:16.959 [2024-09-28 08:58:54.884933] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:21:16.959 [2024-09-28 08:58:54.884941] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.884953] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.884961] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.884976] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.959 [2024-09-28 08:58:54.884987] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.959 [2024-09-28 08:58:54.884993] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.959 [2024-09-28 08:58:54.885001] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:16.959 ===================================================== 00:21:16.959 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:16.959 ===================================================== 00:21:16.959 Controller Capabilities/Features 00:21:16.959 ================================ 00:21:16.959 Vendor ID: 0000 00:21:16.959 Subsystem Vendor ID: 0000 00:21:16.959 Serial Number: .................... 00:21:16.959 Model Number: ........................................ 
00:21:16.959 Firmware Version: 25.01 00:21:16.959 Recommended Arb Burst: 0 00:21:16.959 IEEE OUI Identifier: 00 00 00 00:21:16.959 Multi-path I/O 00:21:16.959 May have multiple subsystem ports: No 00:21:16.959 May have multiple controllers: No 00:21:16.959 Associated with SR-IOV VF: No 00:21:16.959 Max Data Transfer Size: 131072 00:21:16.959 Max Number of Namespaces: 0 00:21:16.959 Max Number of I/O Queues: 1024 00:21:16.959 NVMe Specification Version (VS): 1.3 00:21:16.959 NVMe Specification Version (Identify): 1.3 00:21:16.959 Maximum Queue Entries: 128 00:21:16.959 Contiguous Queues Required: Yes 00:21:16.959 Arbitration Mechanisms Supported 00:21:16.959 Weighted Round Robin: Not Supported 00:21:16.959 Vendor Specific: Not Supported 00:21:16.959 Reset Timeout: 15000 ms 00:21:16.959 Doorbell Stride: 4 bytes 00:21:16.959 NVM Subsystem Reset: Not Supported 00:21:16.959 Command Sets Supported 00:21:16.959 NVM Command Set: Supported 00:21:16.959 Boot Partition: Not Supported 00:21:16.959 Memory Page Size Minimum: 4096 bytes 00:21:16.959 Memory Page Size Maximum: 4096 bytes 00:21:16.959 Persistent Memory Region: Not Supported 00:21:16.959 Optional Asynchronous Events Supported 00:21:16.959 Namespace Attribute Notices: Not Supported 00:21:16.959 Firmware Activation Notices: Not Supported 00:21:16.959 ANA Change Notices: Not Supported 00:21:16.959 PLE Aggregate Log Change Notices: Not Supported 00:21:16.959 LBA Status Info Alert Notices: Not Supported 00:21:16.959 EGE Aggregate Log Change Notices: Not Supported 00:21:16.959 Normal NVM Subsystem Shutdown event: Not Supported 00:21:16.959 Zone Descriptor Change Notices: Not Supported 00:21:16.959 Discovery Log Change Notices: Supported 00:21:16.959 Controller Attributes 00:21:16.960 128-bit Host Identifier: Not Supported 00:21:16.960 Non-Operational Permissive Mode: Not Supported 00:21:16.960 NVM Sets: Not Supported 00:21:16.960 Read Recovery Levels: Not Supported 00:21:16.960 Endurance Groups: Not Supported 00:21:16.960 Predictable Latency Mode: Not Supported 00:21:16.960 Traffic Based Keep ALive: Not Supported 00:21:16.960 Namespace Granularity: Not Supported 00:21:16.960 SQ Associations: Not Supported 00:21:16.960 UUID List: Not Supported 00:21:16.960 Multi-Domain Subsystem: Not Supported 00:21:16.960 Fixed Capacity Management: Not Supported 00:21:16.960 Variable Capacity Management: Not Supported 00:21:16.960 Delete Endurance Group: Not Supported 00:21:16.960 Delete NVM Set: Not Supported 00:21:16.960 Extended LBA Formats Supported: Not Supported 00:21:16.960 Flexible Data Placement Supported: Not Supported 00:21:16.960 00:21:16.960 Controller Memory Buffer Support 00:21:16.960 ================================ 00:21:16.960 Supported: No 00:21:16.960 00:21:16.960 Persistent Memory Region Support 00:21:16.960 ================================ 00:21:16.960 Supported: No 00:21:16.960 00:21:16.960 Admin Command Set Attributes 00:21:16.960 ============================ 00:21:16.960 Security Send/Receive: Not Supported 00:21:16.960 Format NVM: Not Supported 00:21:16.960 Firmware Activate/Download: Not Supported 00:21:16.960 Namespace Management: Not Supported 00:21:16.960 Device Self-Test: Not Supported 00:21:16.960 Directives: Not Supported 00:21:16.960 NVMe-MI: Not Supported 00:21:16.960 Virtualization Management: Not Supported 00:21:16.960 Doorbell Buffer Config: Not Supported 00:21:16.960 Get LBA Status Capability: Not Supported 00:21:16.960 Command & Feature Lockdown Capability: Not Supported 00:21:16.960 Abort Command Limit: 1 00:21:16.960 Async 
Event Request Limit: 4 00:21:16.960 Number of Firmware Slots: N/A 00:21:16.960 Firmware Slot 1 Read-Only: N/A 00:21:16.960 Firmware Activation Without Reset: N/A 00:21:16.960 Multiple Update Detection Support: N/A 00:21:16.960 Firmware Update Granularity: No Information Provided 00:21:16.960 Per-Namespace SMART Log: No 00:21:16.960 Asymmetric Namespace Access Log Page: Not Supported 00:21:16.960 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:16.960 Command Effects Log Page: Not Supported 00:21:16.960 Get Log Page Extended Data: Supported 00:21:16.960 Telemetry Log Pages: Not Supported 00:21:16.960 Persistent Event Log Pages: Not Supported 00:21:16.960 Supported Log Pages Log Page: May Support 00:21:16.960 Commands Supported & Effects Log Page: Not Supported 00:21:16.960 Feature Identifiers & Effects Log Page:May Support 00:21:16.960 NVMe-MI Commands & Effects Log Page: May Support 00:21:16.960 Data Area 4 for Telemetry Log: Not Supported 00:21:16.960 Error Log Page Entries Supported: 128 00:21:16.960 Keep Alive: Not Supported 00:21:16.960 00:21:16.960 NVM Command Set Attributes 00:21:16.960 ========================== 00:21:16.960 Submission Queue Entry Size 00:21:16.960 Max: 1 00:21:16.960 Min: 1 00:21:16.960 Completion Queue Entry Size 00:21:16.960 Max: 1 00:21:16.960 Min: 1 00:21:16.960 Number of Namespaces: 0 00:21:16.960 Compare Command: Not Supported 00:21:16.960 Write Uncorrectable Command: Not Supported 00:21:16.960 Dataset Management Command: Not Supported 00:21:16.960 Write Zeroes Command: Not Supported 00:21:16.960 Set Features Save Field: Not Supported 00:21:16.960 Reservations: Not Supported 00:21:16.960 Timestamp: Not Supported 00:21:16.960 Copy: Not Supported 00:21:16.960 Volatile Write Cache: Not Present 00:21:16.960 Atomic Write Unit (Normal): 1 00:21:16.960 Atomic Write Unit (PFail): 1 00:21:16.960 Atomic Compare & Write Unit: 1 00:21:16.960 Fused Compare & Write: Supported 00:21:16.960 Scatter-Gather List 00:21:16.960 SGL Command Set: Supported 00:21:16.960 SGL Keyed: Supported 00:21:16.960 SGL Bit Bucket Descriptor: Not Supported 00:21:16.960 SGL Metadata Pointer: Not Supported 00:21:16.960 Oversized SGL: Not Supported 00:21:16.960 SGL Metadata Address: Not Supported 00:21:16.960 SGL Offset: Supported 00:21:16.960 Transport SGL Data Block: Not Supported 00:21:16.960 Replay Protected Memory Block: Not Supported 00:21:16.960 00:21:16.960 Firmware Slot Information 00:21:16.960 ========================= 00:21:16.960 Active slot: 0 00:21:16.960 00:21:16.960 00:21:16.960 Error Log 00:21:16.960 ========= 00:21:16.960 00:21:16.960 Active Namespaces 00:21:16.960 ================= 00:21:16.960 Discovery Log Page 00:21:16.960 ================== 00:21:16.960 Generation Counter: 2 00:21:16.960 Number of Records: 2 00:21:16.960 Record Format: 0 00:21:16.960 00:21:16.960 Discovery Log Entry 0 00:21:16.960 ---------------------- 00:21:16.960 Transport Type: 3 (TCP) 00:21:16.960 Address Family: 1 (IPv4) 00:21:16.960 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:16.960 Entry Flags: 00:21:16.960 Duplicate Returned Information: 1 00:21:16.960 Explicit Persistent Connection Support for Discovery: 1 00:21:16.960 Transport Requirements: 00:21:16.960 Secure Channel: Not Required 00:21:16.960 Port ID: 0 (0x0000) 00:21:16.960 Controller ID: 65535 (0xffff) 00:21:16.960 Admin Max SQ Size: 128 00:21:16.960 Transport Service Identifier: 4420 00:21:16.960 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:16.960 Transport Address: 10.0.0.3 00:21:16.960 
Discovery Log Entry 1 00:21:16.960 ---------------------- 00:21:16.960 Transport Type: 3 (TCP) 00:21:16.960 Address Family: 1 (IPv4) 00:21:16.960 Subsystem Type: 2 (NVM Subsystem) 00:21:16.960 Entry Flags: 00:21:16.960 Duplicate Returned Information: 0 00:21:16.960 Explicit Persistent Connection Support for Discovery: 0 00:21:16.960 Transport Requirements: 00:21:16.960 Secure Channel: Not Required 00:21:16.960 Port ID: 0 (0x0000) 00:21:16.960 Controller ID: 65535 (0xffff) 00:21:16.960 Admin Max SQ Size: 128 00:21:16.960 Transport Service Identifier: 4420 00:21:16.960 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:16.960 Transport Address: 10.0.0.3 [2024-09-28 08:58:54.885232] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:16.960 [2024-09-28 08:58:54.885256] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:16.960 [2024-09-28 08:58:54.885269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.960 [2024-09-28 08:58:54.885279] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:21:16.960 [2024-09-28 08:58:54.885287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.960 [2024-09-28 08:58:54.885294] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:21:16.960 [2024-09-28 08:58:54.885302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.960 [2024-09-28 08:58:54.885309] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.960 [2024-09-28 08:58:54.885317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:16.960 [2024-09-28 08:58:54.885335] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.960 [2024-09-28 08:58:54.885345] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.960 [2024-09-28 08:58:54.885352] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.960 [2024-09-28 08:58:54.885366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.960 [2024-09-28 08:58:54.885399] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.960 [2024-09-28 08:58:54.885469] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.960 [2024-09-28 08:58:54.885482] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.960 [2024-09-28 08:58:54.885491] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.885499] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.961 [2024-09-28 08:58:54.885514] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.885524] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.885531] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x61500000f080) 00:21:16.961 [2024-09-28 08:58:54.885544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.961 [2024-09-28 08:58:54.885575] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.961 [2024-09-28 08:58:54.885667] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.961 [2024-09-28 08:58:54.885678] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.961 [2024-09-28 08:58:54.885684] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.885690] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.961 [2024-09-28 08:58:54.885698] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:16.961 [2024-09-28 08:58:54.885707] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:16.961 [2024-09-28 08:58:54.885723] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.885730] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.885747] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.961 [2024-09-28 08:58:54.885767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.961 [2024-09-28 08:58:54.885796] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.961 [2024-09-28 08:58:54.885862] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.961 [2024-09-28 08:58:54.885875] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.961 [2024-09-28 08:58:54.885881] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.885887] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.961 [2024-09-28 08:58:54.885908] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.885915] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.885921] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.961 [2024-09-28 08:58:54.885933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.961 [2024-09-28 08:58:54.885959] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.961 [2024-09-28 08:58:54.886012] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.961 [2024-09-28 08:58:54.886023] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.961 [2024-09-28 08:58:54.886029] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886037] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.961 [2024-09-28 08:58:54.886054] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886062] 
nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886067] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.961 [2024-09-28 08:58:54.886079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.961 [2024-09-28 08:58:54.886102] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.961 [2024-09-28 08:58:54.886171] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.961 [2024-09-28 08:58:54.886182] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.961 [2024-09-28 08:58:54.886188] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886194] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.961 [2024-09-28 08:58:54.886209] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886217] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886222] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.961 [2024-09-28 08:58:54.886233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.961 [2024-09-28 08:58:54.886256] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.961 [2024-09-28 08:58:54.886314] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.961 [2024-09-28 08:58:54.886331] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.961 [2024-09-28 08:58:54.886337] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886344] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.961 [2024-09-28 08:58:54.886360] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886367] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886373] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.961 [2024-09-28 08:58:54.886384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.961 [2024-09-28 08:58:54.886407] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.961 [2024-09-28 08:58:54.886459] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.961 [2024-09-28 08:58:54.886470] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.961 [2024-09-28 08:58:54.886476] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886482] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.961 [2024-09-28 08:58:54.886497] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886504] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886510] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.961 [2024-09-28 08:58:54.886524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.961 [2024-09-28 08:58:54.886548] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.961 [2024-09-28 08:58:54.886605] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.961 [2024-09-28 08:58:54.886615] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.961 [2024-09-28 08:58:54.886621] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886627] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.961 [2024-09-28 08:58:54.886649] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886657] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886662] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.961 [2024-09-28 08:58:54.886674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.961 [2024-09-28 08:58:54.886697] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.961 [2024-09-28 08:58:54.886754] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.961 [2024-09-28 08:58:54.886765] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.961 [2024-09-28 08:58:54.886780] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886788] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.961 [2024-09-28 08:58:54.886818] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886843] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.886866] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.961 [2024-09-28 08:58:54.886878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.961 [2024-09-28 08:58:54.886905] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.961 [2024-09-28 08:58:54.886990] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.961 [2024-09-28 08:58:54.887003] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.961 [2024-09-28 08:58:54.887009] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.887015] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.961 [2024-09-28 08:58:54.887033] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.887041] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.887047] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.961 [2024-09-28 08:58:54.887062] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.961 [2024-09-28 08:58:54.887088] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.961 [2024-09-28 08:58:54.887143] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.961 [2024-09-28 08:58:54.887155] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.961 [2024-09-28 08:58:54.887161] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.887167] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.961 [2024-09-28 08:58:54.887198] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.961 [2024-09-28 08:58:54.887205] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887211] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.962 [2024-09-28 08:58:54.887222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.962 [2024-09-28 08:58:54.887245] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.962 [2024-09-28 08:58:54.887301] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.962 [2024-09-28 08:58:54.887327] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.962 [2024-09-28 08:58:54.887333] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887339] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.962 [2024-09-28 08:58:54.887354] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887361] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887367] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.962 [2024-09-28 08:58:54.887378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.962 [2024-09-28 08:58:54.887407] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.962 [2024-09-28 08:58:54.887464] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.962 [2024-09-28 08:58:54.887475] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.962 [2024-09-28 08:58:54.887480] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887486] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.962 [2024-09-28 08:58:54.887502] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887509] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887518] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.962 [2024-09-28 08:58:54.887530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.962 [2024-09-28 
08:58:54.887553] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.962 [2024-09-28 08:58:54.887606] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.962 [2024-09-28 08:58:54.887617] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.962 [2024-09-28 08:58:54.887622] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887628] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.962 [2024-09-28 08:58:54.887646] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887654] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887659] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.962 [2024-09-28 08:58:54.887670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.962 [2024-09-28 08:58:54.887693] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.962 [2024-09-28 08:58:54.887749] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.962 [2024-09-28 08:58:54.887760] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.962 [2024-09-28 08:58:54.887770] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887776] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.962 [2024-09-28 08:58:54.887792] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887799] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887805] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.962 [2024-09-28 08:58:54.887816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.962 [2024-09-28 08:58:54.887853] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.962 [2024-09-28 08:58:54.887924] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.962 [2024-09-28 08:58:54.887935] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.962 [2024-09-28 08:58:54.887941] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887947] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.962 [2024-09-28 08:58:54.887963] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887969] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.887975] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.962 [2024-09-28 08:58:54.887987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.962 [2024-09-28 08:58:54.888010] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.962 [2024-09-28 
08:58:54.888067] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.962 [2024-09-28 08:58:54.888078] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.962 [2024-09-28 08:58:54.888083] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.888089] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.962 [2024-09-28 08:58:54.888104] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.888112] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.888117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.962 [2024-09-28 08:58:54.888128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.962 [2024-09-28 08:58:54.888155] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.962 [2024-09-28 08:58:54.888205] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.962 [2024-09-28 08:58:54.888215] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.962 [2024-09-28 08:58:54.888221] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.888227] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.962 [2024-09-28 08:58:54.888242] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.888249] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.888258] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.962 [2024-09-28 08:58:54.888274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.962 [2024-09-28 08:58:54.888298] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.962 [2024-09-28 08:58:54.888351] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.962 [2024-09-28 08:58:54.888362] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.962 [2024-09-28 08:58:54.888368] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.962 [2024-09-28 08:58:54.888374] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.963 [2024-09-28 08:58:54.888392] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.963 [2024-09-28 08:58:54.888399] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.963 [2024-09-28 08:58:54.888406] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.963 [2024-09-28 08:58:54.888417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.963 [2024-09-28 08:58:54.888440] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.963 [2024-09-28 08:58:54.888500] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.963 [2024-09-28 08:58:54.888513] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.963 [2024-09-28 08:58:54.888519] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.963 [2024-09-28 08:58:54.888525] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.963 [2024-09-28 08:58:54.888540] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.963 [2024-09-28 08:58:54.888548] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.963 [2024-09-28 08:58:54.888553] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.963 [2024-09-28 08:58:54.888565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.963 [2024-09-28 08:58:54.888588] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.963 [2024-09-28 08:58:54.888654] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.963 [2024-09-28 08:58:54.888665] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.963 [2024-09-28 08:58:54.888670] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.963 [2024-09-28 08:58:54.888676] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.963 [2024-09-28 08:58:54.888692] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.963 [2024-09-28 08:58:54.888699] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.963 [2024-09-28 08:58:54.888705] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.963 [2024-09-28 08:58:54.888716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.963 [2024-09-28 08:58:54.888743] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.963 [2024-09-28 08:58:54.892866] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.963 [2024-09-28 08:58:54.892896] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.963 [2024-09-28 08:58:54.892905] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.963 [2024-09-28 08:58:54.892912] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.963 [2024-09-28 08:58:54.892935] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:16.963 [2024-09-28 08:58:54.892944] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:16.963 [2024-09-28 08:58:54.892951] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:16.963 [2024-09-28 08:58:54.892966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:16.963 [2024-09-28 08:58:54.892999] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:16.963 [2024-09-28 08:58:54.893073] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:16.963 [2024-09-28 08:58:54.893096] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:16.963 [2024-09-28 08:58:54.893104] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:16.963 [2024-09-28 08:58:54.893126] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:16.963 [2024-09-28 08:58:54.893155] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:21:16.963 00:21:16.963 08:58:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:17.225 [2024-09-28 08:58:55.002911] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:21:17.225 [2024-09-28 08:58:55.003027] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79435 ] 00:21:17.225 [2024-09-28 08:58:55.168371] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:17.225 [2024-09-28 08:58:55.168503] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:17.225 [2024-09-28 08:58:55.168515] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:17.226 [2024-09-28 08:58:55.168536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:17.226 [2024-09-28 08:58:55.168551] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:17.226 [2024-09-28 08:58:55.168970] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:17.226 [2024-09-28 08:58:55.169048] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:21:17.226 [2024-09-28 08:58:55.181886] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:17.226 [2024-09-28 08:58:55.181934] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:17.226 [2024-09-28 08:58:55.181944] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:17.226 [2024-09-28 08:58:55.181950] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:17.226 [2024-09-28 08:58:55.182025] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.182039] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.182048] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:17.226 [2024-09-28 08:58:55.182069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:17.226 [2024-09-28 08:58:55.182108] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:17.226 [2024-09-28 08:58:55.189932] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.226 [2024-09-28 08:58:55.189981] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.226 [2024-09-28 08:58:55.189991] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.190011] 
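
The spdk_nvme_identify invocation above connects to the target at 10.0.0.3:4420 and drives the whole admin-queue bring-up that the DEBUG trace records. For orientation, a minimal stand-alone sketch of the same connect path through SPDK's public API (hypothetical example, not taken from the identify tool's source; error handling trimmed):

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr_opts ctrlr_opts;
        struct spdk_nvme_ctrlr *ctrlr;

        /* Environment bootstrap, analogous to the DPDK EAL init logged above. */
        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Transport ID string copied from the command line in the trace. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* spdk_nvme_connect() runs the admin-queue state machine recorded in
         * this log: icreq/icresp, FABRIC CONNECT, property reads, the
         * CC.EN/CSTS.RDY handshake, Identify, AER and keep-alive setup. */
        spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
        ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
        if (ctrlr == NULL) {
            fprintf(stderr, "connect to nqn.2016-06.io.spdk:cnode1 failed\n");
            return 1;
        }

        spdk_nvme_detach(ctrlr);
        return 0;
    }

The property reads and the controller-enable handshake that follow in the trace all happen inside that single spdk_nvme_connect() call.
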
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:17.226 [2024-09-28 08:58:55.190037] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:17.226 [2024-09-28 08:58:55.190054] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:17.226 [2024-09-28 08:58:55.190065] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:17.226 [2024-09-28 08:58:55.190087] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.190096] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.190106] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:17.226 [2024-09-28 08:58:55.190123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.226 [2024-09-28 08:58:55.190161] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:17.226 [2024-09-28 08:58:55.190280] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.226 [2024-09-28 08:58:55.190294] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.226 [2024-09-28 08:58:55.190303] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.190310] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:17.226 [2024-09-28 08:58:55.190320] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:17.226 [2024-09-28 08:58:55.190333] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:17.226 [2024-09-28 08:58:55.190345] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.190353] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.190360] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:17.226 [2024-09-28 08:58:55.190376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.226 [2024-09-28 08:58:55.190405] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:17.226 [2024-09-28 08:58:55.190497] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.226 [2024-09-28 08:58:55.190509] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.226 [2024-09-28 08:58:55.190515] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.190521] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:17.226 [2024-09-28 08:58:55.190531] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:17.226 [2024-09-28 08:58:55.190549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:17.226 [2024-09-28 08:58:55.190564] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.190572] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.190578] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:17.226 [2024-09-28 08:58:55.190592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.226 [2024-09-28 08:58:55.190618] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:17.226 [2024-09-28 08:58:55.190707] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.226 [2024-09-28 08:58:55.190719] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.226 [2024-09-28 08:58:55.190725] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.190734] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:17.226 [2024-09-28 08:58:55.190745] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:17.226 [2024-09-28 08:58:55.190761] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.190769] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.190776] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:17.226 [2024-09-28 08:58:55.190792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.226 [2024-09-28 08:58:55.190834] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:17.226 [2024-09-28 08:58:55.190936] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.226 [2024-09-28 08:58:55.190951] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.226 [2024-09-28 08:58:55.190957] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.190964] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:17.226 [2024-09-28 08:58:55.190973] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:17.226 [2024-09-28 08:58:55.190983] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:17.226 [2024-09-28 08:58:55.191000] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:17.226 [2024-09-28 08:58:55.191110] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:17.226 [2024-09-28 08:58:55.191118] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:17.226 [2024-09-28 08:58:55.191134] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.191142] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.191149] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:17.226 [2024-09-28 08:58:55.191167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.226 [2024-09-28 08:58:55.191211] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:17.226 [2024-09-28 08:58:55.191302] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.226 [2024-09-28 08:58:55.191314] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.226 [2024-09-28 08:58:55.191320] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.191326] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:17.226 [2024-09-28 08:58:55.191335] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:17.226 [2024-09-28 08:58:55.191352] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.191359] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.191366] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:17.226 [2024-09-28 08:58:55.191382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.226 [2024-09-28 08:58:55.191411] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:17.226 [2024-09-28 08:58:55.191484] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.226 [2024-09-28 08:58:55.191495] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.226 [2024-09-28 08:58:55.191501] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.191507] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:17.226 [2024-09-28 08:58:55.191516] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:17.226 [2024-09-28 08:58:55.191528] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:17.226 [2024-09-28 08:58:55.191557] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:17.226 [2024-09-28 08:58:55.191574] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:17.226 [2024-09-28 08:58:55.191595] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.191604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:17.226 [2024-09-28 08:58:55.191618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.226 [2024-09-28 08:58:55.191647] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:17.226 [2024-09-28 08:58:55.191790] 
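
By this point the controller reports CC.EN = 1 && CSTS.RDY = 1 and the driver issues Identify Controller (opcode 06h, CNS 01h, the cdw10:00000001 seen above). A rough sketch of sending the same admin command from application code with the public raw-admin helper; fetch_identify_ctrlr and its busy-wait loop are hypothetical simplifications:

    #include <stdbool.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static void
    identify_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cpl;                 /* a real caller would inspect the status here */
        *(bool *)arg = true;
    }

    /* Hypothetical helper: issue Identify Controller (opcode 06h, CNS 01h) and
     * busy-poll the admin queue until the completion arrives. */
    static int
    fetch_identify_ctrlr(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ctrlr_data *out)
    {
        struct spdk_nvme_cmd cmd = {0};
        bool done = false;
        void *buf = spdk_dma_zmalloc(sizeof(*out), 4096, NULL);

        if (buf == NULL) {
            return -1;
        }
        cmd.opc = SPDK_NVME_OPC_IDENTIFY;   /* 06h, as printed in the trace */
        cmd.cdw10 = 1;                      /* CNS 01h: Identify Controller */
        if (spdk_nvme_ctrlr_cmd_admin_raw(ctrlr, &cmd, buf, sizeof(*out),
                                          identify_done, &done) != 0) {
            spdk_dma_free(buf);
            return -1;
        }
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        memcpy(out, buf, sizeof(*out));
        spdk_dma_free(buf);
        return 0;
    }
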
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:17.226 [2024-09-28 08:58:55.191805] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:17.226 [2024-09-28 08:58:55.191828] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:17.226 [2024-09-28 08:58:55.191851] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:21:17.227 [2024-09-28 08:58:55.191877] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:21:17.227 [2024-09-28 08:58:55.191889] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.191903] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.191911] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.191929] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.227 [2024-09-28 08:58:55.191939] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.227 [2024-09-28 08:58:55.191944] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.191952] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:17.227 [2024-09-28 08:58:55.191969] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:17.227 [2024-09-28 08:58:55.191979] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:17.227 [2024-09-28 08:58:55.191987] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:17.227 [2024-09-28 08:58:55.192000] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:17.227 [2024-09-28 08:58:55.192009] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:17.227 [2024-09-28 08:58:55.192019] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:17.227 [2024-09-28 08:58:55.192034] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:17.227 [2024-09-28 08:58:55.192047] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192055] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192062] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:17.227 [2024-09-28 08:58:55.192081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:17.227 [2024-09-28 08:58:55.192112] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:17.227 [2024-09-28 08:58:55.192207] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.227 [2024-09-28 08:58:55.192235] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.227 [2024-09-28 08:58:55.192241] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.227 [2024-09-28 
08:58:55.192247] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:17.227 [2024-09-28 08:58:55.192264] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192272] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192279] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:21:17.227 [2024-09-28 08:58:55.192294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.227 [2024-09-28 08:58:55.192307] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192316] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192322] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:21:17.227 [2024-09-28 08:58:55.192332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.227 [2024-09-28 08:58:55.192341] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192347] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192353] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:21:17.227 [2024-09-28 08:58:55.192362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.227 [2024-09-28 08:58:55.192371] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192377] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192383] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:17.227 [2024-09-28 08:58:55.192392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.227 [2024-09-28 08:58:55.192400] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:17.227 [2024-09-28 08:58:55.192417] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:17.227 [2024-09-28 08:58:55.192429] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192435] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:17.227 [2024-09-28 08:58:55.192450] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.227 [2024-09-28 08:58:55.192481] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:17.227 [2024-09-28 08:58:55.192492] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:21:17.227 [2024-09-28 08:58:55.192499] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:21:17.227 [2024-09-28 08:58:55.192506] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:17.227 [2024-09-28 08:58:55.192513] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:17.227 [2024-09-28 08:58:55.192640] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.227 [2024-09-28 08:58:55.192652] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.227 [2024-09-28 08:58:55.192658] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192665] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:17.227 [2024-09-28 08:58:55.192674] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:17.227 [2024-09-28 08:58:55.192686] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:17.227 [2024-09-28 08:58:55.192702] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:17.227 [2024-09-28 08:58:55.192713] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:17.227 [2024-09-28 08:58:55.192724] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192731] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192738] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:17.227 [2024-09-28 08:58:55.192750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:17.227 [2024-09-28 08:58:55.192787] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:17.227 [2024-09-28 08:58:55.192903] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.227 [2024-09-28 08:58:55.192919] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.227 [2024-09-28 08:58:55.192925] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.192933] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:17.227 [2024-09-28 08:58:55.193029] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:17.227 [2024-09-28 08:58:55.193055] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:17.227 [2024-09-28 08:58:55.193073] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.193082] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:17.227 [2024-09-28 08:58:55.193097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.227 [2024-09-28 08:58:55.193129] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:17.227 [2024-09-28 
08:58:55.193284] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:17.227 [2024-09-28 08:58:55.193299] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:17.227 [2024-09-28 08:58:55.193306] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.193312] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:21:17.227 [2024-09-28 08:58:55.193319] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:21:17.227 [2024-09-28 08:58:55.193326] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.193340] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.193347] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.193359] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.227 [2024-09-28 08:58:55.193368] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.227 [2024-09-28 08:58:55.193373] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.193380] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:17.227 [2024-09-28 08:58:55.193415] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:17.227 [2024-09-28 08:58:55.193434] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:17.227 [2024-09-28 08:58:55.193455] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:17.227 [2024-09-28 08:58:55.193472] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.193480] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:17.227 [2024-09-28 08:58:55.193498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.227 [2024-09-28 08:58:55.193526] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:17.227 [2024-09-28 08:58:55.193639] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:17.227 [2024-09-28 08:58:55.193650] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:17.227 [2024-09-28 08:58:55.193656] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.193662] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:21:17.227 [2024-09-28 08:58:55.193669] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:21:17.227 [2024-09-28 08:58:55.193676] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.227 [2024-09-28 08:58:55.193690] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.193698] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.193709] 
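
The trace above shows Namespace 1 being added and then described via Identify Namespace and its ID descriptors. Application code normally consumes that result through the public namespace accessors; a minimal sketch, assuming ctrlr came from a successful spdk_nvme_connect():

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical snippet: walk the active namespaces discovered by the
     * identify-active-ns / identify-ns exchange traced above. */
    static void
    list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        uint32_t nsid;

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

            printf("nsid %u: %llu blocks of %u bytes\n", nsid,
                   (unsigned long long)nsdata->nsze,
                   spdk_nvme_ns_get_sector_size(ns));
        }
    }
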
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.228 [2024-09-28 08:58:55.193718] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.228 [2024-09-28 08:58:55.193724] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.193731] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:17.228 [2024-09-28 08:58:55.193766] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:17.228 [2024-09-28 08:58:55.193787] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:17.228 [2024-09-28 08:58:55.193804] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.193812] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:17.228 [2024-09-28 08:58:55.193846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.228 [2024-09-28 08:58:55.197879] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:17.228 [2024-09-28 08:58:55.197973] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:17.228 [2024-09-28 08:58:55.197987] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:17.228 [2024-09-28 08:58:55.197994] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198000] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:21:17.228 [2024-09-28 08:58:55.198007] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:21:17.228 [2024-09-28 08:58:55.198024] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198039] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198046] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198070] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.228 [2024-09-28 08:58:55.198081] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.228 [2024-09-28 08:58:55.198086] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198093] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:17.228 [2024-09-28 08:58:55.198125] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:17.228 [2024-09-28 08:58:55.198141] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:17.228 [2024-09-28 08:58:55.198154] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:17.228 [2024-09-28 08:58:55.198164] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host 
behavior support feature (timeout 30000 ms) 00:21:17.228 [2024-09-28 08:58:55.198177] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:17.228 [2024-09-28 08:58:55.198186] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:17.228 [2024-09-28 08:58:55.198194] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:17.228 [2024-09-28 08:58:55.198201] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:17.228 [2024-09-28 08:58:55.198210] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:17.228 [2024-09-28 08:58:55.198242] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198252] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:17.228 [2024-09-28 08:58:55.198267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.228 [2024-09-28 08:58:55.198278] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198285] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198292] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:21:17.228 [2024-09-28 08:58:55.198306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:17.228 [2024-09-28 08:58:55.198342] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:17.228 [2024-09-28 08:58:55.198354] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:21:17.228 [2024-09-28 08:58:55.198451] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.228 [2024-09-28 08:58:55.198463] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.228 [2024-09-28 08:58:55.198469] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198476] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:17.228 [2024-09-28 08:58:55.198488] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.228 [2024-09-28 08:58:55.198497] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.228 [2024-09-28 08:58:55.198502] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198508] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:21:17.228 [2024-09-28 08:58:55.198532] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198541] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:21:17.228 [2024-09-28 08:58:55.198553] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.228 [2024-09-28 
08:58:55.198579] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:21:17.228 [2024-09-28 08:58:55.198662] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.228 [2024-09-28 08:58:55.198674] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.228 [2024-09-28 08:58:55.198679] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198686] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:21:17.228 [2024-09-28 08:58:55.198701] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198709] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:21:17.228 [2024-09-28 08:58:55.198721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.228 [2024-09-28 08:58:55.198745] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:21:17.228 [2024-09-28 08:58:55.198849] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.228 [2024-09-28 08:58:55.198868] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.228 [2024-09-28 08:58:55.198875] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198882] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:21:17.228 [2024-09-28 08:58:55.198898] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.198906] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:21:17.228 [2024-09-28 08:58:55.198921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.228 [2024-09-28 08:58:55.198948] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:21:17.228 [2024-09-28 08:58:55.199048] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.228 [2024-09-28 08:58:55.199060] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.228 [2024-09-28 08:58:55.199066] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.199072] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:21:17.228 [2024-09-28 08:58:55.199103] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.199115] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:21:17.228 [2024-09-28 08:58:55.199128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.228 [2024-09-28 08:58:55.199142] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.199149] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:21:17.228 [2024-09-28 08:58:55.199163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff 
cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.228 [2024-09-28 08:58:55.199175] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.199182] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:21:17.228 [2024-09-28 08:58:55.199194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.228 [2024-09-28 08:58:55.199208] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.199215] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:21:17.228 [2024-09-28 08:58:55.199229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.228 [2024-09-28 08:58:55.199257] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:21:17.228 [2024-09-28 08:58:55.199269] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:21:17.228 [2024-09-28 08:58:55.199277] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:21:17.228 [2024-09-28 08:58:55.199284] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:21:17.228 [2024-09-28 08:58:55.199556] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:17.228 [2024-09-28 08:58:55.199579] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:17.228 [2024-09-28 08:58:55.199587] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.199594] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:21:17.228 [2024-09-28 08:58:55.199606] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:21:17.228 [2024-09-28 08:58:55.199613] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.199644] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.199653] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:17.228 [2024-09-28 08:58:55.199662] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:17.228 [2024-09-28 08:58:55.199671] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:17.228 [2024-09-28 08:58:55.199676] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199682] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:21:17.229 [2024-09-28 08:58:55.199689] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:21:17.229 [2024-09-28 08:58:55.199696] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199708] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199719] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199727] 
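
The Get Log Page (opcode 02h) requests above select, judging by the low byte of each cdw10, the Error Information (01h), SMART / Health Information (02h), Firmware Slot (03h) and Commands Supported and Effects (05h) pages. A hedged sketch of one such request through the public API; read_health_log is a hypothetical helper and a real caller would also check the completion status:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    static void
    log_page_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cpl;
        *(bool *)arg = true;
    }

    /* Hypothetical helper: read the SMART / Health Information page (LID 02h)
     * with the same Get Log Page opcode (02h) that appears in the trace. */
    static int
    read_health_log(struct spdk_nvme_ctrlr *ctrlr,
                    struct spdk_nvme_health_information_page *hp)
    {
        bool done = false;
        int rc;

        rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                              SPDK_NVME_GLOBAL_NS_TAG, hp, sizeof(*hp),
                                              0, log_page_done, &done);
        if (rc != 0) {
            return rc;
        }
        while (!done) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }
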
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:17.229 [2024-09-28 08:58:55.199735] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:17.229 [2024-09-28 08:58:55.199741] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199747] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:21:17.229 [2024-09-28 08:58:55.199753] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:21:17.229 [2024-09-28 08:58:55.199760] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199771] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199777] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199785] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:17.229 [2024-09-28 08:58:55.199798] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:17.229 [2024-09-28 08:58:55.199824] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199848] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:21:17.229 [2024-09-28 08:58:55.199855] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:21:17.229 [2024-09-28 08:58:55.199861] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199872] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199878] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199887] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.229 [2024-09-28 08:58:55.199895] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.229 [2024-09-28 08:58:55.199901] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199908] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:21:17.229 [2024-09-28 08:58:55.199938] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.229 [2024-09-28 08:58:55.199949] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.229 [2024-09-28 08:58:55.199954] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199961] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:21:17.229 ===================================================== 00:21:17.229 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:17.229 ===================================================== 00:21:17.229 Controller Capabilities/Features 00:21:17.229 ================================ 00:21:17.229 Vendor ID: 8086 00:21:17.229 Subsystem Vendor ID: 8086 00:21:17.229 Serial Number: SPDK00000000000001 00:21:17.229 Model Number: SPDK bdev Controller 00:21:17.229 Firmware Version: 25.01 00:21:17.229 Recommended Arb Burst: 6 00:21:17.229 IEEE OUI Identifier: e4 d2 5c 00:21:17.229 Multi-path I/O 00:21:17.229 May have multiple subsystem ports: Yes 00:21:17.229 May 
have multiple controllers: Yes 00:21:17.229 Associated with SR-IOV VF: No 00:21:17.229 Max Data Transfer Size: 131072 00:21:17.229 Max Number of Namespaces: 32 00:21:17.229 Max Number of I/O Queues: 127 00:21:17.229 NVMe Specification Version (VS): 1.3 00:21:17.229 NVMe Specification Version (Identify): 1.3 00:21:17.229 Maximum Queue Entries: 128 00:21:17.229 Contiguous Queues Required: Yes 00:21:17.229 Arbitration Mechanisms Supported 00:21:17.229 Weighted Round Robin: Not Supported 00:21:17.229 Vendor Specific: Not Supported 00:21:17.229 Reset Timeout: 15000 ms 00:21:17.229 Doorbell Stride: 4 bytes 00:21:17.229 NVM Subsystem Reset: Not Supported 00:21:17.229 Command Sets Supported 00:21:17.229 NVM Command Set: Supported 00:21:17.229 Boot Partition: Not Supported 00:21:17.229 Memory Page Size Minimum: 4096 bytes 00:21:17.229 Memory Page Size Maximum: 4096 bytes 00:21:17.229 Persistent Memory Region: Not Supported 00:21:17.229 Optional Asynchronous Events Supported 00:21:17.229 Namespace Attribute Notices: Supported 00:21:17.229 Firmware Activation Notices: Not Supported 00:21:17.229 ANA Change Notices: Not Supported 00:21:17.229 PLE Aggregate Log Change Notices: Not Supported 00:21:17.229 LBA Status Info Alert Notices: Not Supported 00:21:17.229 EGE Aggregate Log Change Notices: Not Supported 00:21:17.229 Normal NVM Subsystem Shutdown event: Not Supported 00:21:17.229 Zone Descriptor Change Notices: Not Supported 00:21:17.229 Discovery Log Change Notices: Not Supported 00:21:17.229 Controller Attributes 00:21:17.229 128-bit Host Identifier: Supported 00:21:17.229 Non-Operational Permissive Mode: Not Supported 00:21:17.229 NVM Sets: Not Supported 00:21:17.229 Read Recovery Levels: Not Supported 00:21:17.229 Endurance Groups: Not Supported 00:21:17.229 Predictable Latency Mode: Not Supported 00:21:17.229 Traffic Based Keep ALive: Not Supported 00:21:17.229 Namespace Granularity: Not Supported 00:21:17.229 SQ Associations: Not Supported 00:21:17.229 UUID List: Not Supported 00:21:17.229 Multi-Domain Subsystem: Not Supported 00:21:17.229 Fixed Capacity Management: Not Supported 00:21:17.229 Variable Capacity Management: Not Supported 00:21:17.229 Delete Endurance Group: Not Supported 00:21:17.229 Delete NVM Set: Not Supported 00:21:17.229 Extended LBA Formats Supported: Not Supported 00:21:17.229 Flexible Data Placement Supported: Not Supported 00:21:17.229 00:21:17.229 Controller Memory Buffer Support 00:21:17.229 ================================ 00:21:17.229 Supported: No 00:21:17.229 00:21:17.229 Persistent Memory Region Support 00:21:17.229 ================================ 00:21:17.229 Supported: No 00:21:17.229 00:21:17.229 Admin Command Set Attributes 00:21:17.229 ============================ 00:21:17.229 Security Send/Receive: Not Supported 00:21:17.229 Format NVM: Not Supported 00:21:17.229 Firmware Activate/Download: Not Supported 00:21:17.229 Namespace Management: Not Supported 00:21:17.229 Device Self-Test: Not Supported 00:21:17.229 Directives: Not Supported 00:21:17.229 NVMe-MI: Not Supported 00:21:17.229 Virtualization Management: Not Supported 00:21:17.229 Doorbell Buffer Config: Not Supported 00:21:17.229 Get LBA Status Capability: Not Supported 00:21:17.229 Command & Feature Lockdown Capability: Not Supported 00:21:17.229 Abort Command Limit: 4 00:21:17.229 Async Event Request Limit: 4 00:21:17.229 Number of Firmware Slots: N/A 00:21:17.229 Firmware Slot 1 Read-Only: N/A 00:21:17.229 Firmware Activation Without Reset: [2024-09-28 08:58:55.199975] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.229 [2024-09-28 08:58:55.199985] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.229 [2024-09-28 08:58:55.199990] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.199997] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:21:17.229 [2024-09-28 08:58:55.200008] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.229 [2024-09-28 08:58:55.200017] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.229 [2024-09-28 08:58:55.200023] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.229 [2024-09-28 08:58:55.200031] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:21:17.229 N/A 00:21:17.229 Multiple Update Detection Support: N/A 00:21:17.229 Firmware Update Granularity: No Information Provided 00:21:17.229 Per-Namespace SMART Log: No 00:21:17.229 Asymmetric Namespace Access Log Page: Not Supported 00:21:17.229 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:17.229 Command Effects Log Page: Supported 00:21:17.229 Get Log Page Extended Data: Supported 00:21:17.229 Telemetry Log Pages: Not Supported 00:21:17.229 Persistent Event Log Pages: Not Supported 00:21:17.229 Supported Log Pages Log Page: May Support 00:21:17.229 Commands Supported & Effects Log Page: Not Supported 00:21:17.229 Feature Identifiers & Effects Log Page:May Support 00:21:17.229 NVMe-MI Commands & Effects Log Page: May Support 00:21:17.229 Data Area 4 for Telemetry Log: Not Supported 00:21:17.229 Error Log Page Entries Supported: 128 00:21:17.229 Keep Alive: Supported 00:21:17.229 Keep Alive Granularity: 10000 ms 00:21:17.229 00:21:17.229 NVM Command Set Attributes 00:21:17.229 ========================== 00:21:17.229 Submission Queue Entry Size 00:21:17.229 Max: 64 00:21:17.229 Min: 64 00:21:17.229 Completion Queue Entry Size 00:21:17.229 Max: 16 00:21:17.229 Min: 16 00:21:17.229 Number of Namespaces: 32 00:21:17.229 Compare Command: Supported 00:21:17.229 Write Uncorrectable Command: Not Supported 00:21:17.229 Dataset Management Command: Supported 00:21:17.229 Write Zeroes Command: Supported 00:21:17.229 Set Features Save Field: Not Supported 00:21:17.229 Reservations: Supported 00:21:17.229 Timestamp: Not Supported 00:21:17.229 Copy: Supported 00:21:17.229 Volatile Write Cache: Present 00:21:17.229 Atomic Write Unit (Normal): 1 00:21:17.229 Atomic Write Unit (PFail): 1 00:21:17.229 Atomic Compare & Write Unit: 1 00:21:17.229 Fused Compare & Write: Supported 00:21:17.229 Scatter-Gather List 00:21:17.229 SGL Command Set: Supported 00:21:17.230 SGL Keyed: Supported 00:21:17.230 SGL Bit Bucket Descriptor: Not Supported 00:21:17.230 SGL Metadata Pointer: Not Supported 00:21:17.230 Oversized SGL: Not Supported 00:21:17.230 SGL Metadata Address: Not Supported 00:21:17.230 SGL Offset: Supported 00:21:17.230 Transport SGL Data Block: Not Supported 00:21:17.230 Replay Protected Memory Block: Not Supported 00:21:17.230 00:21:17.230 Firmware Slot Information 00:21:17.230 ========================= 00:21:17.230 Active slot: 1 00:21:17.230 Slot 1 Firmware Revision: 25.01 00:21:17.230 00:21:17.230 00:21:17.230 Commands Supported and Effects 00:21:17.230 ============================== 00:21:17.230 Admin Commands 00:21:17.230 -------------- 00:21:17.230 Get Log Page (02h): Supported 00:21:17.230 Identify (06h): 
Supported 00:21:17.230 Abort (08h): Supported 00:21:17.230 Set Features (09h): Supported 00:21:17.230 Get Features (0Ah): Supported 00:21:17.230 Asynchronous Event Request (0Ch): Supported 00:21:17.230 Keep Alive (18h): Supported 00:21:17.230 I/O Commands 00:21:17.230 ------------ 00:21:17.230 Flush (00h): Supported LBA-Change 00:21:17.230 Write (01h): Supported LBA-Change 00:21:17.230 Read (02h): Supported 00:21:17.230 Compare (05h): Supported 00:21:17.230 Write Zeroes (08h): Supported LBA-Change 00:21:17.230 Dataset Management (09h): Supported LBA-Change 00:21:17.230 Copy (19h): Supported LBA-Change 00:21:17.230 00:21:17.230 Error Log 00:21:17.230 ========= 00:21:17.230 00:21:17.230 Arbitration 00:21:17.230 =========== 00:21:17.230 Arbitration Burst: 1 00:21:17.230 00:21:17.230 Power Management 00:21:17.230 ================ 00:21:17.230 Number of Power States: 1 00:21:17.230 Current Power State: Power State #0 00:21:17.230 Power State #0: 00:21:17.230 Max Power: 0.00 W 00:21:17.230 Non-Operational State: Operational 00:21:17.230 Entry Latency: Not Reported 00:21:17.230 Exit Latency: Not Reported 00:21:17.230 Relative Read Throughput: 0 00:21:17.230 Relative Read Latency: 0 00:21:17.230 Relative Write Throughput: 0 00:21:17.230 Relative Write Latency: 0 00:21:17.230 Idle Power: Not Reported 00:21:17.230 Active Power: Not Reported 00:21:17.230 Non-Operational Permissive Mode: Not Supported 00:21:17.230 00:21:17.230 Health Information 00:21:17.230 ================== 00:21:17.230 Critical Warnings: 00:21:17.230 Available Spare Space: OK 00:21:17.230 Temperature: OK 00:21:17.230 Device Reliability: OK 00:21:17.230 Read Only: No 00:21:17.230 Volatile Memory Backup: OK 00:21:17.230 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:17.230 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:17.230 Available Spare: 0% 00:21:17.230 Available Spare Threshold: 0% 00:21:17.230 Life Percentage Used:[2024-09-28 08:58:55.200204] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.230 [2024-09-28 08:58:55.200216] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:21:17.230 [2024-09-28 08:58:55.200230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.230 [2024-09-28 08:58:55.200262] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:21:17.230 [2024-09-28 08:58:55.200337] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.230 [2024-09-28 08:58:55.200353] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.230 [2024-09-28 08:58:55.200359] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.230 [2024-09-28 08:58:55.200366] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:21:17.230 [2024-09-28 08:58:55.200433] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:17.230 [2024-09-28 08:58:55.200460] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:21:17.230 [2024-09-28 08:58:55.200473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.230 [2024-09-28 08:58:55.200482] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on 
tqpair=0x61500000f080 00:21:17.230 [2024-09-28 08:58:55.200490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.230 [2024-09-28 08:58:55.200497] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:21:17.230 [2024-09-28 08:58:55.200509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.230 [2024-09-28 08:58:55.200517] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:17.230 [2024-09-28 08:58:55.200525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:17.230 [2024-09-28 08:58:55.200558] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.230 [2024-09-28 08:58:55.200566] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.230 [2024-09-28 08:58:55.200573] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:17.230 [2024-09-28 08:58:55.200587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.230 [2024-09-28 08:58:55.200623] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:17.230 [2024-09-28 08:58:55.200705] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.230 [2024-09-28 08:58:55.200718] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.230 [2024-09-28 08:58:55.200725] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.230 [2024-09-28 08:58:55.200732] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:17.230 [2024-09-28 08:58:55.200745] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.230 [2024-09-28 08:58:55.200757] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.230 [2024-09-28 08:58:55.200764] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:17.230 [2024-09-28 08:58:55.200804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.230 [2024-09-28 08:58:55.200865] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:17.230 [2024-09-28 08:58:55.200995] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.230 [2024-09-28 08:58:55.201008] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.230 [2024-09-28 08:58:55.201014] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.230 [2024-09-28 08:58:55.201027] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:17.230 [2024-09-28 08:58:55.201037] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:17.230 [2024-09-28 08:58:55.201046] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:17.230 [2024-09-28 08:58:55.201064] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.230 [2024-09-28 08:58:55.201073] 
nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.230 [2024-09-28 08:58:55.201080] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:17.230 [2024-09-28 08:58:55.201098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.230 [2024-09-28 08:58:55.201155] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:17.230 [2024-09-28 08:58:55.201259] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.230 [2024-09-28 08:58:55.201271] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.230 [2024-09-28 08:58:55.201279] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.230 [2024-09-28 08:58:55.201285] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:17.230 [2024-09-28 08:58:55.201302] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.230 [2024-09-28 08:58:55.201310] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.230 [2024-09-28 08:58:55.201316] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:17.230 [2024-09-28 08:58:55.201331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.230 [2024-09-28 08:58:55.201357] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:17.230 [2024-09-28 08:58:55.201436] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.230 [2024-09-28 08:58:55.201447] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.231 [2024-09-28 08:58:55.201453] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.231 [2024-09-28 08:58:55.201463] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:17.231 [2024-09-28 08:58:55.201480] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.231 [2024-09-28 08:58:55.201488] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.231 [2024-09-28 08:58:55.201494] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:17.231 [2024-09-28 08:58:55.201506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.231 [2024-09-28 08:58:55.201530] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:17.231 [2024-09-28 08:58:55.201606] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.231 [2024-09-28 08:58:55.201627] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.231 [2024-09-28 08:58:55.201634] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.231 [2024-09-28 08:58:55.201640] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:17.231 [2024-09-28 08:58:55.201658] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.231 [2024-09-28 08:58:55.201665] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.231 [2024-09-28 08:58:55.201671] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:17.231 [2024-09-28 08:58:55.201687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.231 [2024-09-28 08:58:55.201713] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:17.231 [2024-09-28 08:58:55.201787] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.231 [2024-09-28 08:58:55.205891] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.231 [2024-09-28 08:58:55.205913] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.231 [2024-09-28 08:58:55.205922] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:17.231 [2024-09-28 08:58:55.205945] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:17.231 [2024-09-28 08:58:55.205960] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:17.231 [2024-09-28 08:58:55.205967] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:21:17.231 [2024-09-28 08:58:55.205981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:17.231 [2024-09-28 08:58:55.206013] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:21:17.231 [2024-09-28 08:58:55.206111] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:17.231 [2024-09-28 08:58:55.206132] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:17.231 [2024-09-28 08:58:55.206139] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:17.231 [2024-09-28 08:58:55.206145] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:21:17.231 [2024-09-28 08:58:55.206159] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:21:17.489 0% 00:21:17.489 Data Units Read: 0 00:21:17.489 Data Units Written: 0 00:21:17.489 Host Read Commands: 0 00:21:17.489 Host Write Commands: 0 00:21:17.489 Controller Busy Time: 0 minutes 00:21:17.489 Power Cycles: 0 00:21:17.489 Power On Hours: 0 hours 00:21:17.489 Unsafe Shutdowns: 0 00:21:17.489 Unrecoverable Media Errors: 0 00:21:17.489 Lifetime Error Log Entries: 0 00:21:17.489 Warning Temperature Time: 0 minutes 00:21:17.489 Critical Temperature Time: 0 minutes 00:21:17.489 00:21:17.489 Number of Queues 00:21:17.489 ================ 00:21:17.489 Number of I/O Submission Queues: 127 00:21:17.489 Number of I/O Completion Queues: 127 00:21:17.489 00:21:17.489 Active Namespaces 00:21:17.489 ================= 00:21:17.489 Namespace ID:1 00:21:17.489 Error Recovery Timeout: Unlimited 00:21:17.489 Command Set Identifier: NVM (00h) 00:21:17.489 Deallocate: Supported 00:21:17.489 Deallocated/Unwritten Error: Not Supported 00:21:17.489 Deallocated Read Value: Unknown 00:21:17.489 Deallocate in Write Zeroes: Not Supported 00:21:17.489 Deallocated Guard Field: 0xFFFF 00:21:17.489 Flush: Supported 00:21:17.489 Reservation: Supported 00:21:17.489 Namespace Sharing Capabilities: Multiple Controllers 00:21:17.489 Size (in LBAs): 131072 (0GiB) 00:21:17.489 Capacity (in LBAs): 131072 (0GiB) 00:21:17.489 Utilization (in LBAs): 131072 (0GiB) 00:21:17.490 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:21:17.490 EUI64: ABCDEF0123456789 00:21:17.490 UUID: ef9020ae-cfc4-4ae1-897d-8483e59663eb 00:21:17.490 Thin Provisioning: Not Supported 00:21:17.490 Per-NS Atomic Units: Yes 00:21:17.490 Atomic Boundary Size (Normal): 0 00:21:17.490 Atomic Boundary Size (PFail): 0 00:21:17.490 Atomic Boundary Offset: 0 00:21:17.490 Maximum Single Source Range Length: 65535 00:21:17.490 Maximum Copy Length: 65535 00:21:17.490 Maximum Source Range Count: 1 00:21:17.490 NGUID/EUI64 Never Reused: No 00:21:17.490 Namespace Write Protected: No 00:21:17.490 Number of LBA Formats: 1 00:21:17.490 Current LBA Format: LBA Format #00 00:21:17.490 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:17.490 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.490 rmmod nvme_tcp 00:21:17.490 rmmod nvme_fabrics 00:21:17.490 rmmod nvme_keyring 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 79390 ']' 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 79390 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 79390 ']' 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 79390 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79390 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79390' 
00:21:17.490 killing process with pid 79390 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 79390 00:21:17.490 08:58:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 79390 00:21:18.864 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:18.864 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:18.864 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:18.864 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:18.864 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:18.864 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:21:18.864 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:21:18.864 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:18.864 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:21:18.865 00:21:18.865 real 0m4.008s 00:21:18.865 user 0m10.125s 00:21:18.865 sys 0m0.958s 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:18.865 ************************************ 00:21:18.865 END TEST nvmf_identify 00:21:18.865 
************************************ 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.865 ************************************ 00:21:18.865 START TEST nvmf_perf 00:21:18.865 ************************************ 00:21:18.865 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:19.124 * Looking for test storage... 00:21:19.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:19.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.124 --rc genhtml_branch_coverage=1 00:21:19.124 --rc genhtml_function_coverage=1 00:21:19.124 --rc genhtml_legend=1 00:21:19.124 --rc geninfo_all_blocks=1 00:21:19.124 --rc geninfo_unexecuted_blocks=1 00:21:19.124 00:21:19.124 ' 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:19.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.124 --rc genhtml_branch_coverage=1 00:21:19.124 --rc genhtml_function_coverage=1 00:21:19.124 --rc genhtml_legend=1 00:21:19.124 --rc geninfo_all_blocks=1 00:21:19.124 --rc geninfo_unexecuted_blocks=1 00:21:19.124 00:21:19.124 ' 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:19.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.124 --rc genhtml_branch_coverage=1 00:21:19.124 --rc genhtml_function_coverage=1 00:21:19.124 --rc genhtml_legend=1 00:21:19.124 --rc geninfo_all_blocks=1 00:21:19.124 --rc geninfo_unexecuted_blocks=1 00:21:19.124 00:21:19.124 ' 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:19.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.124 --rc genhtml_branch_coverage=1 00:21:19.124 --rc genhtml_function_coverage=1 00:21:19.124 --rc genhtml_legend=1 00:21:19.124 --rc geninfo_all_blocks=1 00:21:19.124 --rc geninfo_unexecuted_blocks=1 00:21:19.124 00:21:19.124 ' 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.124 08:58:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.124 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:21:19.124 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:21:19.124 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.124 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.124 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:19.124 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.124 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:19.124 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.124 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.124 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.124 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.124 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.125 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:19.125 Cannot find device "nvmf_init_br" 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:19.125 Cannot find device "nvmf_init_br2" 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:19.125 Cannot find device "nvmf_tgt_br" 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:19.125 Cannot find device "nvmf_tgt_br2" 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:19.125 Cannot find device "nvmf_init_br" 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:19.125 Cannot find device "nvmf_init_br2" 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:19.125 Cannot find device "nvmf_tgt_br" 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:21:19.125 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:19.384 Cannot find device "nvmf_tgt_br2" 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:19.384 Cannot find device "nvmf_br" 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:19.384 Cannot find device "nvmf_init_if" 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:19.384 Cannot find device "nvmf_init_if2" 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:19.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:19.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:19.384 08:58:57 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:19.384 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:19.385 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:19.385 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:19.385 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:19.385 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:19.385 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:19.385 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:19.385 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:19.385 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:19.385 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:19.385 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:19.385 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:19.385 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:19.385 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:19.644 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:19.644 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:21:19.644 00:21:19.644 --- 10.0.0.3 ping statistics --- 00:21:19.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.644 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:19.644 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:19.644 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:21:19.644 00:21:19.644 --- 10.0.0.4 ping statistics --- 00:21:19.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.644 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:19.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:19.644 00:21:19.644 --- 10.0.0.1 ping statistics --- 00:21:19.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.644 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:19.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:21:19.644 00:21:19.644 --- 10.0.0.2 ping statistics --- 00:21:19.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.644 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=79663 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 79663 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 79663 ']' 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:19.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:19.644 08:58:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:19.644 [2024-09-28 08:58:57.517831] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:21:19.644 [2024-09-28 08:58:57.518030] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.903 [2024-09-28 08:58:57.676597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.903 [2024-09-28 08:58:57.829022] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.903 [2024-09-28 08:58:57.829094] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.903 [2024-09-28 08:58:57.829112] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.903 [2024-09-28 08:58:57.829124] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.903 [2024-09-28 08:58:57.829137] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.903 [2024-09-28 08:58:57.829362] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.903 [2024-09-28 08:58:57.830147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.903 [2024-09-28 08:58:57.830278] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.903 [2024-09-28 08:58:57.830301] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.162 [2024-09-28 08:58:57.987550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:20.728 08:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:20.729 08:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:21:20.729 08:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:20.729 08:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:20.729 08:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:20.729 08:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.729 08:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:20.729 08:58:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:21:21.296 08:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:21.296 08:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:21:21.556 08:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:21:21.556 08:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:21.815 08:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:21.815 08:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:21:21.815 08:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:21.815 08:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:21.815 08:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:22.074 [2024-09-28 08:58:59.902748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.074 08:58:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:22.332 08:59:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:22.332 08:59:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:22.591 08:59:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:22.591 08:59:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:22.850 08:59:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:23.109 [2024-09-28 08:59:00.891050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:23.109 08:59:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:23.367 08:59:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:23.367 08:59:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:23.367 08:59:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:23.367 08:59:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:24.746 Initializing NVMe Controllers 00:21:24.746 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:24.746 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:24.746 Initialization complete. Launching workers. 00:21:24.746 ======================================================== 00:21:24.746 Latency(us) 00:21:24.746 Device Information : IOPS MiB/s Average min max 00:21:24.746 PCIE (0000:00:10.0) NSID 1 from core 0: 23264.00 90.88 1374.89 366.52 9160.73 00:21:24.746 ======================================================== 00:21:24.746 Total : 23264.00 90.88 1374.89 366.52 9160.73 00:21:24.746 00:21:24.746 08:59:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:26.124 Initializing NVMe Controllers 00:21:26.124 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:26.124 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:26.124 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:26.124 Initialization complete. Launching workers. 
00:21:26.124 ======================================================== 00:21:26.124 Latency(us) 00:21:26.124 Device Information : IOPS MiB/s Average min max 00:21:26.124 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2954.98 11.54 337.97 127.94 5151.99 00:21:26.125 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8103.93 5549.89 12048.88 00:21:26.125 ======================================================== 00:21:26.125 Total : 3078.98 12.03 650.72 127.94 12048.88 00:21:26.125 00:21:26.125 08:59:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:27.504 Initializing NVMe Controllers 00:21:27.504 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:27.504 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:27.504 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:27.504 Initialization complete. Launching workers. 00:21:27.504 ======================================================== 00:21:27.504 Latency(us) 00:21:27.504 Device Information : IOPS MiB/s Average min max 00:21:27.504 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7833.98 30.60 4099.60 897.85 10045.82 00:21:27.504 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3980.99 15.55 8076.79 5847.44 15578.36 00:21:27.504 ======================================================== 00:21:27.504 Total : 11814.98 46.15 5439.69 897.85 15578.36 00:21:27.504 00:21:27.504 08:59:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:21:27.504 08:59:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:30.793 Initializing NVMe Controllers 00:21:30.793 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:30.793 Controller IO queue size 128, less than required. 00:21:30.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.793 Controller IO queue size 128, less than required. 00:21:30.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.793 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:30.793 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:30.793 Initialization complete. Launching workers. 
00:21:30.793 ======================================================== 00:21:30.793 Latency(us) 00:21:30.793 Device Information : IOPS MiB/s Average min max 00:21:30.793 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1753.82 438.46 76045.19 38844.57 239329.31 00:21:30.793 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 600.94 150.23 219742.89 109745.17 448630.95 00:21:30.793 ======================================================== 00:21:30.793 Total : 2354.76 588.69 112717.08 38844.57 448630.95 00:21:30.793 00:21:30.793 08:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:21:30.793 Initializing NVMe Controllers 00:21:30.793 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:30.794 Controller IO queue size 128, less than required. 00:21:30.794 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.794 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:30.794 Controller IO queue size 128, less than required. 00:21:30.794 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.794 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:21:30.794 WARNING: Some requested NVMe devices were skipped 00:21:30.794 No valid NVMe controllers or AIO or URING devices found 00:21:30.794 08:59:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:21:33.330 Initializing NVMe Controllers 00:21:33.330 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:33.330 Controller IO queue size 128, less than required. 00:21:33.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:33.330 Controller IO queue size 128, less than required. 00:21:33.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:33.330 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:33.330 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:33.330 Initialization complete. Launching workers. 
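Every latency table in this stretch comes from the same tool with only the queue depth, I/O size, duration, and a couple of extras changing. A generic shape of the invocation, with the target string copied from the log (-H/-I on the second run presumably switch on the TCP header/data digests, and --transport-stat on the last one produces the per-poll-group counters printed just below):

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'

# 50/50 random read/write against both exported namespaces.
"$PERF" -q 1   -o 4096   -w randrw -M 50 -t 1 -r "$TGT"
"$PERF" -q 32  -o 4096   -w randrw -M 50 -t 1 -HI -r "$TGT"
"$PERF" -q 128 -o 262144 -w randrw -M 50 -t 2 --transport-stat -r "$TGT"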
00:21:33.330 00:21:33.330 ==================== 00:21:33.330 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:33.330 TCP transport: 00:21:33.330 polls: 7321 00:21:33.330 idle_polls: 3552 00:21:33.330 sock_completions: 3769 00:21:33.330 nvme_completions: 6007 00:21:33.330 submitted_requests: 8944 00:21:33.330 queued_requests: 1 00:21:33.330 00:21:33.330 ==================== 00:21:33.330 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:33.330 TCP transport: 00:21:33.330 polls: 8046 00:21:33.330 idle_polls: 4588 00:21:33.330 sock_completions: 3458 00:21:33.330 nvme_completions: 5853 00:21:33.330 submitted_requests: 8818 00:21:33.330 queued_requests: 1 00:21:33.330 ======================================================== 00:21:33.330 Latency(us) 00:21:33.330 Device Information : IOPS MiB/s Average min max 00:21:33.330 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1498.29 374.57 89863.18 38470.94 342703.92 00:21:33.330 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1459.87 364.97 88651.59 44433.59 252244.79 00:21:33.330 ======================================================== 00:21:33.330 Total : 2958.16 739.54 89265.25 38470.94 342703.92 00:21:33.330 00:21:33.589 08:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:33.589 08:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:33.848 08:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:21:33.848 08:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:21:33.848 08:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:21:34.106 08:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=d77bd1cc-9b96-42d9-87d7-60eaabfc4fe3 00:21:34.106 08:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb d77bd1cc-9b96-42d9-87d7-60eaabfc4fe3 00:21:34.106 08:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=d77bd1cc-9b96-42d9-87d7-60eaabfc4fe3 00:21:34.106 08:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:21:34.107 08:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:21:34.107 08:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:21:34.107 08:59:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:34.365 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:21:34.365 { 00:21:34.365 "uuid": "d77bd1cc-9b96-42d9-87d7-60eaabfc4fe3", 00:21:34.365 "name": "lvs_0", 00:21:34.365 "base_bdev": "Nvme0n1", 00:21:34.365 "total_data_clusters": 1278, 00:21:34.365 "free_clusters": 1278, 00:21:34.365 "block_size": 4096, 00:21:34.365 "cluster_size": 4194304 00:21:34.365 } 00:21:34.365 ]' 00:21:34.365 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d77bd1cc-9b96-42d9-87d7-60eaabfc4fe3") .free_clusters' 00:21:34.365 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:21:34.365 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="d77bd1cc-9b96-42d9-87d7-60eaabfc4fe3") .cluster_size' 00:21:34.365 5112 00:21:34.365 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:21:34.365 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:21:34.365 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:21:34.365 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:21:34.365 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d77bd1cc-9b96-42d9-87d7-60eaabfc4fe3 lbd_0 5112 00:21:34.623 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=d57fdc95-228c-4826-8fd0-dffb7211b636 00:21:34.623 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore d57fdc95-228c-4826-8fd0-dffb7211b636 lvs_n_0 00:21:34.881 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=85aab159-ce67-44f9-bc7e-9288f543bdbf 00:21:34.881 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 85aab159-ce67-44f9-bc7e-9288f543bdbf 00:21:34.881 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=85aab159-ce67-44f9-bc7e-9288f543bdbf 00:21:34.881 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:21:34.881 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:21:34.881 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:21:34.881 08:59:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:35.139 08:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:21:35.139 { 00:21:35.139 "uuid": "d77bd1cc-9b96-42d9-87d7-60eaabfc4fe3", 00:21:35.139 "name": "lvs_0", 00:21:35.139 "base_bdev": "Nvme0n1", 00:21:35.139 "total_data_clusters": 1278, 00:21:35.139 "free_clusters": 0, 00:21:35.139 "block_size": 4096, 00:21:35.139 "cluster_size": 4194304 00:21:35.139 }, 00:21:35.139 { 00:21:35.139 "uuid": "85aab159-ce67-44f9-bc7e-9288f543bdbf", 00:21:35.139 "name": "lvs_n_0", 00:21:35.139 "base_bdev": "d57fdc95-228c-4826-8fd0-dffb7211b636", 00:21:35.140 "total_data_clusters": 1276, 00:21:35.140 "free_clusters": 1276, 00:21:35.140 "block_size": 4096, 00:21:35.140 "cluster_size": 4194304 00:21:35.140 } 00:21:35.140 ]' 00:21:35.140 08:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="85aab159-ce67-44f9-bc7e-9288f543bdbf") .free_clusters' 00:21:35.398 08:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:21:35.398 08:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="85aab159-ce67-44f9-bc7e-9288f543bdbf") .cluster_size' 00:21:35.398 08:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:21:35.398 08:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:21:35.398 5104 00:21:35.398 08:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:21:35.398 08:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:21:35.398 08:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 85aab159-ce67-44f9-bc7e-9288f543bdbf lbd_nest_0 5104 00:21:35.657 08:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=6bf04f9c-3ecb-445f-9ff8-4d0291c0c182 00:21:35.657 08:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:35.916 08:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:21:35.916 08:59:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 6bf04f9c-3ecb-445f-9ff8-4d0291c0c182 00:21:36.175 08:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:36.434 08:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:21:36.434 08:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:21:36.434 08:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:36.434 08:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:36.434 08:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:36.694 Initializing NVMe Controllers 00:21:36.694 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:36.694 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:36.694 WARNING: Some requested NVMe devices were skipped 00:21:36.694 No valid NVMe controllers or AIO or URING devices found 00:21:36.694 08:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:36.694 08:59:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:48.902 Initializing NVMe Controllers 00:21:48.902 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:48.902 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:48.902 Initialization complete. Launching workers. 
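The lvol calls above layer one store inside another so that the upcoming sweeps exercise a logical volume nested on top of the NVMe drive. Stripped of the jq bookkeeping (the echoed 5112/5104 figures are just free_clusters times cluster_size as reported by bdev_lvol_get_lvstores), the sequence is roughly:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

ls_guid=$("$RPC" bdev_lvol_create_lvstore Nvme0n1 lvs_0)               # store on the raw NVMe bdev
lb_guid=$("$RPC" bdev_lvol_create -u "$ls_guid" lbd_0 5112)            # lvol filling the 5112 MiB free space
ls_nested_guid=$("$RPC" bdev_lvol_create_lvstore "$lb_guid" lvs_n_0)   # nested store on that lvol
lb_nested_guid=$("$RPC" bdev_lvol_create -u "$ls_nested_guid" lbd_nest_0 5104)

# lbd_nest_0 is then exported as namespace 1 of the re-created nqn.2016-06.io.spdk:cnode1.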
00:21:48.902 ======================================================== 00:21:48.902 Latency(us) 00:21:48.902 Device Information : IOPS MiB/s Average min max 00:21:48.902 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 840.02 105.00 1188.41 392.00 8679.45 00:21:48.902 ======================================================== 00:21:48.902 Total : 840.02 105.00 1188.41 392.00 8679.45 00:21:48.902 00:21:48.902 08:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:48.902 08:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:48.902 08:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:48.902 Initializing NVMe Controllers 00:21:48.902 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:48.902 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:48.902 WARNING: Some requested NVMe devices were skipped 00:21:48.902 No valid NVMe controllers or AIO or URING devices found 00:21:48.902 08:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:48.902 08:59:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:58.934 Initializing NVMe Controllers 00:21:58.934 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:58.934 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:58.934 Initialization complete. Launching workers. 
00:21:58.934 ======================================================== 00:21:58.934 Latency(us) 00:21:58.934 Device Information : IOPS MiB/s Average min max 00:21:58.934 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1341.93 167.74 23858.53 6348.49 67767.81 00:21:58.934 ======================================================== 00:21:58.934 Total : 1341.93 167.74 23858.53 6348.49 67767.81 00:21:58.934 00:21:58.934 08:59:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:58.934 08:59:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:58.934 08:59:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:58.934 Initializing NVMe Controllers 00:21:58.934 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:58.934 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:58.934 WARNING: Some requested NVMe devices were skipped 00:21:58.934 No valid NVMe controllers or AIO or URING devices found 00:21:58.934 08:59:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:58.934 08:59:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:22:08.912 Initializing NVMe Controllers 00:22:08.912 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:22:08.912 Controller IO queue size 128, less than required. 00:22:08.912 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:08.912 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:08.912 Initialization complete. Launching workers. 
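The alternating "invalid ns size ... for I/O size 512" skips and the 131072-byte latency tables all come out of one nested loop, paraphrasing the qd_depth/io_size arrays set at host/perf.sh@95-96 above:

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
qd_depth=(1 32 128)
io_size=(512 131072)

for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
        # The 512 B passes are skipped by the tool: the nested lvol namespace
        # advertises 4096 B blocks, so a 512 B I/O size is invalid for it.
        "$PERF" -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$TGT"
    done
done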
00:22:08.912 ======================================================== 00:22:08.912 Latency(us) 00:22:08.912 Device Information : IOPS MiB/s Average min max 00:22:08.912 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3612.59 451.57 35493.57 7963.87 86534.46 00:22:08.912 ======================================================== 00:22:08.912 Total : 3612.59 451.57 35493.57 7963.87 86534.46 00:22:08.912 00:22:08.912 08:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:09.171 08:59:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6bf04f9c-3ecb-445f-9ff8-4d0291c0c182 00:22:09.429 08:59:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:22:09.687 08:59:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d57fdc95-228c-4826-8fd0-dffb7211b636 00:22:09.945 08:59:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:10.204 rmmod nvme_tcp 00:22:10.204 rmmod nvme_fabrics 00:22:10.204 rmmod nvme_keyring 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 79663 ']' 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 79663 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 79663 ']' 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 79663 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:10.204 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79663 00:22:10.463 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:10.463 killing process with pid 79663 00:22:10.463 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:10.463 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79663' 00:22:10.463 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 79663 00:22:10.463 08:59:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 79663 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:22:12.995 00:22:12.995 real 0m53.788s 00:22:12.995 user 3m22.803s 00:22:12.995 sys 0m12.414s 00:22:12.995 ************************************ 00:22:12.995 END TEST nvmf_perf 00:22:12.995 ************************************ 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:12.995 ************************************ 00:22:12.995 START TEST nvmf_fio_host 00:22:12.995 ************************************ 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:12.995 * Looking for test storage... 00:22:12.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:12.995 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:12.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.996 --rc genhtml_branch_coverage=1 00:22:12.996 --rc genhtml_function_coverage=1 00:22:12.996 --rc genhtml_legend=1 00:22:12.996 --rc geninfo_all_blocks=1 00:22:12.996 --rc geninfo_unexecuted_blocks=1 00:22:12.996 00:22:12.996 ' 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:12.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.996 --rc genhtml_branch_coverage=1 00:22:12.996 --rc genhtml_function_coverage=1 00:22:12.996 --rc genhtml_legend=1 00:22:12.996 --rc geninfo_all_blocks=1 00:22:12.996 --rc geninfo_unexecuted_blocks=1 00:22:12.996 00:22:12.996 ' 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:12.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.996 --rc genhtml_branch_coverage=1 00:22:12.996 --rc genhtml_function_coverage=1 00:22:12.996 --rc genhtml_legend=1 00:22:12.996 --rc geninfo_all_blocks=1 00:22:12.996 --rc geninfo_unexecuted_blocks=1 00:22:12.996 00:22:12.996 ' 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:12.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.996 --rc genhtml_branch_coverage=1 00:22:12.996 --rc genhtml_function_coverage=1 00:22:12.996 --rc genhtml_legend=1 00:22:12.996 --rc geninfo_all_blocks=1 00:22:12.996 --rc geninfo_unexecuted_blocks=1 00:22:12.996 00:22:12.996 ' 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.996 08:59:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.996 08:59:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:12.996 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:12.996 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:12.997 Cannot find device "nvmf_init_br" 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:12.997 Cannot find device "nvmf_init_br2" 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:12.997 Cannot find device "nvmf_tgt_br" 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:22:12.997 Cannot find device "nvmf_tgt_br2" 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:12.997 Cannot find device "nvmf_init_br" 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:12.997 Cannot find device "nvmf_init_br2" 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:22:12.997 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:12.997 Cannot find device "nvmf_tgt_br" 00:22:13.256 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:22:13.256 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:13.256 Cannot find device "nvmf_tgt_br2" 00:22:13.256 08:59:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:13.256 Cannot find device "nvmf_br" 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:13.256 Cannot find device "nvmf_init_if" 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:13.256 Cannot find device "nvmf_init_if2" 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:13.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:13.256 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:13.256 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:13.515 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
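For orientation, the veth plumbing that nvmf_veth_init has just finished building (names and addresses exactly as logged; the commands below are only a convenient way to inspect it and are not part of the test):

# Root namespace:          nvmf_init_if 10.0.0.1/24, nvmf_init_if2 10.0.0.2/24
# nvmf_tgt_ns_spdk netns:  nvmf_tgt_if 10.0.0.3/24,  nvmf_tgt_if2 10.0.0.4/24
# All four veth peers (nvmf_init_br*, nvmf_tgt_br*) sit on the nvmf_br bridge, and the
# iptables rules above admit TCP/4420 on the init interfaces plus forwarding across nvmf_br.
ip addr show nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr show nvmf_tgt_if
ip link show master nvmf_br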
00:22:13.515 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:22:13.515 00:22:13.515 --- 10.0.0.3 ping statistics --- 00:22:13.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.515 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:13.515 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:13.515 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:22:13.515 00:22:13.515 --- 10.0.0.4 ping statistics --- 00:22:13.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.515 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:13.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:13.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:22:13.515 00:22:13.515 --- 10.0.0.1 ping statistics --- 00:22:13.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.515 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:13.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:22:13.515 00:22:13.515 --- 10.0.0.2 ping statistics --- 00:22:13.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.515 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=80566 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 80566 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 80566 ']' 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:13.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:13.515 08:59:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.515 [2024-09-28 08:59:51.418669] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:13.515 [2024-09-28 08:59:51.418852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.774 [2024-09-28 08:59:51.591685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:13.774 [2024-09-28 08:59:51.741353] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.774 [2024-09-28 08:59:51.741640] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.774 [2024-09-28 08:59:51.741838] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.774 [2024-09-28 08:59:51.742083] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.774 [2024-09-28 08:59:51.742132] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
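Condensed from the trace above, the target launch amounts to starting nvmf_tgt inside the test network namespace with the flags shown, then waiting for its RPC socket before any configuration calls. The until-loop below is only an illustrative stand-in for the waitforlisten helper; the paths, flags, and the /var/tmp/spdk.sock address are the ones from the log.

    # Start the SPDK NVMe-oF target inside the test namespace
    # (instance 0, all trace groups enabled, 4-core mask, as in the trace).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Illustrative wait: poll the default RPC socket until the target answers.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done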
00:22:13.774 [2024-09-28 08:59:51.742391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.774 [2024-09-28 08:59:51.742519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.774 [2024-09-28 08:59:51.742574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.774 [2024-09-28 08:59:51.742595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:14.033 [2024-09-28 08:59:51.902226] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:14.601 08:59:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:14.601 08:59:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:22:14.601 08:59:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:14.860 [2024-09-28 08:59:52.620540] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.860 08:59:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:14.860 08:59:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:14.860 08:59:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.860 08:59:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:15.119 Malloc1 00:22:15.119 08:59:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:15.378 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:15.637 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:15.896 [2024-09-28 08:59:53.672368] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:15.896 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:16.155 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:16.156 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:22:16.156 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:16.156 08:59:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:16.415 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:16.415 fio-3.35 00:22:16.415 Starting 1 thread 00:22:19.078 00:22:19.078 test: (groupid=0, jobs=1): err= 0: pid=80637: Sat Sep 28 08:59:56 2024 00:22:19.078 read: IOPS=7476, BW=29.2MiB/s (30.6MB/s)(58.6MiB/2008msec) 00:22:19.078 slat (usec): min=2, max=197, avg= 2.78, stdev= 2.63 00:22:19.078 clat (usec): min=2028, max=15813, avg=8892.21, stdev=717.35 00:22:19.078 lat (usec): min=2056, max=15816, avg=8894.99, stdev=717.29 00:22:19.078 clat percentiles (usec): 00:22:19.078 | 1.00th=[ 7570], 5.00th=[ 7963], 10.00th=[ 8094], 20.00th=[ 8356], 00:22:19.078 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:22:19.078 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10028], 00:22:19.078 | 99.00th=[10814], 99.50th=[11207], 99.90th=[14091], 99.95th=[14746], 00:22:19.078 | 99.99th=[15664] 00:22:19.078 bw ( KiB/s): min=27984, max=30824, per=99.99%, avg=29904.00, stdev=1305.89, samples=4 00:22:19.078 iops : min= 6996, max= 7706, avg=7476.00, stdev=326.47, samples=4 00:22:19.078 write: IOPS=7471, BW=29.2MiB/s (30.6MB/s)(58.6MiB/2008msec); 0 zone resets 00:22:19.078 slat (usec): min=2, max=109, avg= 2.90, stdev= 1.88 00:22:19.078 clat (usec): min=1337, max=15782, avg=8119.41, stdev=689.74 00:22:19.078 lat (usec): min=1345, max=15785, avg=8122.31, stdev=689.75 00:22:19.078 clat percentiles (usec): 00:22:19.078 | 1.00th=[ 6915], 5.00th=[ 7242], 10.00th=[ 7439], 20.00th=[ 7635], 00:22:19.078 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8029], 60.00th=[ 8160], 00:22:19.078 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 9241], 00:22:19.078 | 99.00th=[ 9896], 99.50th=[10421], 99.90th=[14091], 99.95th=[14877], 00:22:19.078 | 99.99th=[15795] 00:22:19.078 bw ( KiB/s): min=29000, max=31064, per=99.96%, avg=29874.00, stdev=951.15, samples=4 00:22:19.078 iops : min= 7250, max= 7766, avg=7468.50, stdev=237.79, samples=4 
00:22:19.078 lat (msec) : 2=0.01%, 4=0.10%, 10=96.70%, 20=3.20% 00:22:19.078 cpu : usr=73.44%, sys=20.28%, ctx=3, majf=0, minf=1554 00:22:19.078 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:19.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:19.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:19.078 issued rwts: total=15013,15003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:19.078 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:19.078 00:22:19.078 Run status group 0 (all jobs): 00:22:19.078 READ: bw=29.2MiB/s (30.6MB/s), 29.2MiB/s-29.2MiB/s (30.6MB/s-30.6MB/s), io=58.6MiB (61.5MB), run=2008-2008msec 00:22:19.078 WRITE: bw=29.2MiB/s (30.6MB/s), 29.2MiB/s-29.2MiB/s (30.6MB/s-30.6MB/s), io=58.6MiB (61.5MB), run=2008-2008msec 00:22:19.078 ----------------------------------------------------- 00:22:19.078 Suppressions used: 00:22:19.078 count bytes template 00:22:19.078 1 57 /usr/src/fio/parse.c 00:22:19.078 1 8 libtcmalloc_minimal.so 00:22:19.078 ----------------------------------------------------- 00:22:19.078 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:19.078 08:59:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:22:19.078 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:19.078 fio-3.35 00:22:19.078 Starting 1 thread 00:22:21.612 00:22:21.612 test: (groupid=0, jobs=1): err= 0: pid=80684: Sat Sep 28 08:59:59 2024 00:22:21.612 read: IOPS=7366, BW=115MiB/s (121MB/s)(231MiB/2010msec) 00:22:21.612 slat (usec): min=3, max=133, avg= 4.18, stdev= 2.51 00:22:21.612 clat (usec): min=2999, max=19469, avg=9899.87, stdev=2717.30 00:22:21.612 lat (usec): min=3002, max=19473, avg=9904.04, stdev=2717.31 00:22:21.612 clat percentiles (usec): 00:22:21.612 | 1.00th=[ 5080], 5.00th=[ 5866], 10.00th=[ 6521], 20.00th=[ 7504], 00:22:21.612 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10421], 00:22:21.612 | 70.00th=[11338], 80.00th=[12125], 90.00th=[13566], 95.00th=[14746], 00:22:21.612 | 99.00th=[17171], 99.50th=[17433], 99.90th=[19006], 99.95th=[19268], 00:22:21.612 | 99.99th=[19530] 00:22:21.612 bw ( KiB/s): min=49760, max=67104, per=49.73%, avg=58616.00, stdev=7795.02, samples=4 00:22:21.612 iops : min= 3110, max= 4194, avg=3663.50, stdev=487.19, samples=4 00:22:21.612 write: IOPS=4185, BW=65.4MiB/s (68.6MB/s)(120MiB/1831msec); 0 zone resets 00:22:21.612 slat (usec): min=32, max=204, avg=37.00, stdev= 8.53 00:22:21.612 clat (usec): min=5567, max=24902, avg=13514.36, stdev=2372.61 00:22:21.612 lat (usec): min=5631, max=24935, avg=13551.36, stdev=2372.36 00:22:21.612 clat percentiles (usec): 00:22:21.612 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[10683], 20.00th=[11469], 00:22:21.612 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13304], 60.00th=[13829], 00:22:21.612 | 70.00th=[14615], 80.00th=[15533], 90.00th=[16581], 95.00th=[17695], 00:22:21.612 | 99.00th=[19792], 99.50th=[20317], 99.90th=[22676], 99.95th=[24773], 00:22:21.612 | 99.99th=[24773] 00:22:21.612 bw ( KiB/s): min=52480, max=69632, per=91.19%, avg=61064.00, stdev=7699.10, samples=4 00:22:21.612 iops : min= 3280, max= 4352, avg=3816.50, stdev=481.19, samples=4 00:22:21.612 lat (msec) : 4=0.08%, 10=37.73%, 20=61.94%, 50=0.26% 00:22:21.612 cpu : usr=80.95%, sys=15.12%, ctx=4, majf=0, minf=2211 00:22:21.612 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:21.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.612 issued rwts: total=14807,7663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.612 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.612 00:22:21.612 Run status group 0 (all jobs): 00:22:21.612 READ: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=231MiB (243MB), run=2010-2010msec 00:22:21.612 WRITE: bw=65.4MiB/s (68.6MB/s), 65.4MiB/s-65.4MiB/s (68.6MB/s-68.6MB/s), io=120MiB (126MB), run=1831-1831msec 00:22:21.871 ----------------------------------------------------- 00:22:21.871 Suppressions used: 00:22:21.871 count bytes template 00:22:21.871 1 57 /usr/src/fio/parse.c 00:22:21.871 216 20736 /usr/src/fio/iolog.c 00:22:21.871 1 8 libtcmalloc_minimal.so 00:22:21.871 ----------------------------------------------------- 00:22:21.871 00:22:21.871 08:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:22.131 08:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:22:22.131 08:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:22:22.131 08:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:22:22.131 08:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:22:22.131 08:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:22:22.131 08:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:22:22.131 08:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:22:22.131 08:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:22:22.131 08:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:22:22.131 08:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:22:22.131 08:59:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:22:22.390 Nvme0n1 00:22:22.390 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:22:22.650 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=d2b381f1-5fc3-47de-82e8-a2df39f28c77 00:22:22.650 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb d2b381f1-5fc3-47de-82e8-a2df39f28c77 00:22:22.650 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=d2b381f1-5fc3-47de-82e8-a2df39f28c77 00:22:22.650 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:22:22.650 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:22:22.650 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:22:22.650 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:22.908 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:22:22.908 { 00:22:22.908 "uuid": "d2b381f1-5fc3-47de-82e8-a2df39f28c77", 00:22:22.908 "name": "lvs_0", 00:22:22.908 "base_bdev": "Nvme0n1", 00:22:22.908 "total_data_clusters": 4, 00:22:22.908 "free_clusters": 4, 00:22:22.908 "block_size": 4096, 00:22:22.908 "cluster_size": 1073741824 00:22:22.908 } 00:22:22.908 ]' 00:22:22.908 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d2b381f1-5fc3-47de-82e8-a2df39f28c77") .free_clusters' 00:22:22.908 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:22:22.908 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d2b381f1-5fc3-47de-82e8-a2df39f28c77") .cluster_size' 00:22:23.167 4096 00:22:23.167 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:22:23.167 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:22:23.167 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1374 -- # echo 4096 00:22:23.167 09:00:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:22:23.425 2cdc346a-8a7f-4459-a066-3e952467a983 00:22:23.425 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:22:23.683 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:22:23.942 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:22:24.201 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:24.201 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:24.201 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:24.201 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:24.201 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:24.201 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:24.201 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:24.201 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:24.201 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:24.201 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:24.201 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:24.201 09:00:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:24.201 09:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:24.201 09:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:24.201 09:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:22:24.201 09:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:24.201 09:00:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:24.201 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:24.201 fio-3.35 00:22:24.201 Starting 1 thread 
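The local-NVMe pass just traced reduces to a short rpc.py sequence: attach the PCIe controller, carve a logical volume store and volume out of it, and export the volume through the TCP listener. The free-size arithmetic mirrors the get_lvs_free_mb helper (free_clusters times cluster_size, expressed in MiB, 4 x 1 GiB = 4096 MiB here); for brevity the jq filters select by lvstore name rather than the UUID used in the trace.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    "$rpc" bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0

    # free MiB = free_clusters * cluster_size / 1 MiB
    fc=$("$rpc" bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .free_clusters')
    cs=$("$rpc" bdev_lvol_get_lvstores | jq '.[] | select(.name=="lvs_0") .cluster_size')
    free_mb=$(( fc * cs / 1024 / 1024 ))

    "$rpc" bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420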
00:22:26.736 00:22:26.736 test: (groupid=0, jobs=1): err= 0: pid=80787: Sat Sep 28 09:00:04 2024 00:22:26.736 read: IOPS=4928, BW=19.3MiB/s (20.2MB/s)(38.7MiB/2011msec) 00:22:26.736 slat (usec): min=2, max=299, avg= 3.54, stdev= 4.40 00:22:26.736 clat (usec): min=3642, max=25082, avg=13530.19, stdev=1128.46 00:22:26.736 lat (usec): min=3652, max=25086, avg=13533.73, stdev=1128.08 00:22:26.736 clat percentiles (usec): 00:22:26.736 | 1.00th=[11207], 5.00th=[11994], 10.00th=[12256], 20.00th=[12649], 00:22:26.736 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13698], 00:22:26.736 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:22:26.736 | 99.00th=[16057], 99.50th=[16712], 99.90th=[21627], 99.95th=[23200], 00:22:26.736 | 99.99th=[25035] 00:22:26.736 bw ( KiB/s): min=18544, max=20160, per=99.91%, avg=19698.00, stdev=773.27, samples=4 00:22:26.736 iops : min= 4636, max= 5040, avg=4924.50, stdev=193.32, samples=4 00:22:26.736 write: IOPS=4921, BW=19.2MiB/s (20.2MB/s)(38.7MiB/2011msec); 0 zone resets 00:22:26.736 slat (usec): min=2, max=152, avg= 3.69, stdev= 3.14 00:22:26.736 clat (usec): min=2466, max=23258, avg=12289.03, stdev=1068.72 00:22:26.736 lat (usec): min=2486, max=23261, avg=12292.72, stdev=1068.64 00:22:26.736 clat percentiles (usec): 00:22:26.736 | 1.00th=[10159], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:22:26.736 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:22:26.736 | 70.00th=[12780], 80.00th=[13042], 90.00th=[13435], 95.00th=[13829], 00:22:26.736 | 99.00th=[14746], 99.50th=[15270], 99.90th=[21365], 99.95th=[21890], 00:22:26.736 | 99.99th=[23200] 00:22:26.736 bw ( KiB/s): min=19520, max=19904, per=99.90%, avg=19666.00, stdev=181.77, samples=4 00:22:26.736 iops : min= 4880, max= 4976, avg=4916.50, stdev=45.44, samples=4 00:22:26.736 lat (msec) : 4=0.05%, 10=0.48%, 20=99.29%, 50=0.18% 00:22:26.736 cpu : usr=73.18%, sys=21.54%, ctx=5, majf=0, minf=1553 00:22:26.736 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:26.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:26.736 issued rwts: total=9912,9897,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.736 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:26.736 00:22:26.736 Run status group 0 (all jobs): 00:22:26.736 READ: bw=19.3MiB/s (20.2MB/s), 19.3MiB/s-19.3MiB/s (20.2MB/s-20.2MB/s), io=38.7MiB (40.6MB), run=2011-2011msec 00:22:26.736 WRITE: bw=19.2MiB/s (20.2MB/s), 19.2MiB/s-19.2MiB/s (20.2MB/s-20.2MB/s), io=38.7MiB (40.5MB), run=2011-2011msec 00:22:26.994 ----------------------------------------------------- 00:22:26.994 Suppressions used: 00:22:26.994 count bytes template 00:22:26.994 1 58 /usr/src/fio/parse.c 00:22:26.994 1 8 libtcmalloc_minimal.so 00:22:26.994 ----------------------------------------------------- 00:22:26.994 00:22:26.994 09:00:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:27.253 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:22:27.512 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=41d2a743-3b00-4b92-ab1d-ce9a0a0a9372 00:22:27.512 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # 
get_lvs_free_mb 41d2a743-3b00-4b92-ab1d-ce9a0a0a9372 00:22:27.512 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=41d2a743-3b00-4b92-ab1d-ce9a0a0a9372 00:22:27.512 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:22:27.512 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:22:27.512 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:22:27.512 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:27.771 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:22:27.771 { 00:22:27.771 "uuid": "d2b381f1-5fc3-47de-82e8-a2df39f28c77", 00:22:27.771 "name": "lvs_0", 00:22:27.771 "base_bdev": "Nvme0n1", 00:22:27.771 "total_data_clusters": 4, 00:22:27.771 "free_clusters": 0, 00:22:27.771 "block_size": 4096, 00:22:27.771 "cluster_size": 1073741824 00:22:27.771 }, 00:22:27.771 { 00:22:27.771 "uuid": "41d2a743-3b00-4b92-ab1d-ce9a0a0a9372", 00:22:27.771 "name": "lvs_n_0", 00:22:27.771 "base_bdev": "2cdc346a-8a7f-4459-a066-3e952467a983", 00:22:27.771 "total_data_clusters": 1022, 00:22:27.771 "free_clusters": 1022, 00:22:27.771 "block_size": 4096, 00:22:27.771 "cluster_size": 4194304 00:22:27.771 } 00:22:27.771 ]' 00:22:27.771 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="41d2a743-3b00-4b92-ab1d-ce9a0a0a9372") .free_clusters' 00:22:27.771 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:22:27.771 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="41d2a743-3b00-4b92-ab1d-ce9a0a0a9372") .cluster_size' 00:22:27.771 4088 00:22:27.771 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:22:27.771 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:22:27.771 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:22:27.771 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:22:28.029 7b1785e4-ffdc-4966-817d-524ef83cec8f 00:22:28.029 09:00:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:22:28.287 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:22:28.546 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 
00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # break 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:28.805 09:00:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:29.064 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:29.064 fio-3.35 00:22:29.064 Starting 1 thread 00:22:31.599 00:22:31.599 test: (groupid=0, jobs=1): err= 0: pid=80863: Sat Sep 28 09:00:09 2024 00:22:31.599 read: IOPS=4517, BW=17.6MiB/s (18.5MB/s)(35.5MiB/2012msec) 00:22:31.599 slat (usec): min=2, max=357, avg= 4.12, stdev= 5.45 00:22:31.599 clat (usec): min=4105, max=26337, avg=14804.42, stdev=1254.11 00:22:31.599 lat (usec): min=4116, max=26340, avg=14808.54, stdev=1253.62 00:22:31.599 clat percentiles (usec): 00:22:31.599 | 1.00th=[12256], 5.00th=[13042], 10.00th=[13435], 20.00th=[13829], 00:22:31.599 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:22:31.599 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:22:31.599 | 99.00th=[17433], 99.50th=[18220], 99.90th=[24249], 99.95th=[25560], 00:22:31.599 | 99.99th=[26346] 00:22:31.599 bw ( KiB/s): min=17328, max=18424, per=99.98%, avg=18066.00, stdev=498.72, samples=4 00:22:31.599 iops : min= 4332, max= 4606, avg=4516.50, stdev=124.68, samples=4 00:22:31.599 write: IOPS=4524, BW=17.7MiB/s (18.5MB/s)(35.6MiB/2012msec); 0 zone resets 00:22:31.599 slat (usec): min=3, max=286, avg= 4.29, stdev= 4.20 00:22:31.599 clat (usec): min=3030, max=25344, avg=13421.36, stdev=1233.54 00:22:31.599 lat (usec): min=3048, max=25347, avg=13425.66, stdev=1233.31 00:22:31.599 clat percentiles (usec): 00:22:31.599 | 1.00th=[10814], 5.00th=[11731], 10.00th=[12125], 
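Each fio pass in this test uses the invocation pattern visible in the fio_plugin traces above: resolve the ASan runtime the SPDK plugin links against, preload it together with the plugin, and hand fio an SPDK-style filename that encodes the TCP transport address. A condensed sketch, with the paths and filename string taken from the log:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme

    # The plugin is ASan-instrumented, so its libasan must be preloaded ahead of it.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096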
20.00th=[12518], 00:22:31.599 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:22:31.599 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14746], 95.00th=[15139], 00:22:31.599 | 99.00th=[16057], 99.50th=[17433], 99.90th=[23987], 99.95th=[25035], 00:22:31.599 | 99.99th=[25297] 00:22:31.599 bw ( KiB/s): min=17928, max=18312, per=99.82%, avg=18066.00, stdev=170.25, samples=4 00:22:31.599 iops : min= 4482, max= 4578, avg=4516.50, stdev=42.56, samples=4 00:22:31.599 lat (msec) : 4=0.01%, 10=0.40%, 20=99.32%, 50=0.28% 00:22:31.599 cpu : usr=76.08%, sys=18.85%, ctx=2, majf=0, minf=1553 00:22:31.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:31.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:31.599 issued rwts: total=9089,9104,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:31.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:31.599 00:22:31.599 Run status group 0 (all jobs): 00:22:31.599 READ: bw=17.6MiB/s (18.5MB/s), 17.6MiB/s-17.6MiB/s (18.5MB/s-18.5MB/s), io=35.5MiB (37.2MB), run=2012-2012msec 00:22:31.599 WRITE: bw=17.7MiB/s (18.5MB/s), 17.7MiB/s-17.7MiB/s (18.5MB/s-18.5MB/s), io=35.6MiB (37.3MB), run=2012-2012msec 00:22:31.599 ----------------------------------------------------- 00:22:31.599 Suppressions used: 00:22:31.599 count bytes template 00:22:31.599 1 58 /usr/src/fio/parse.c 00:22:31.599 1 8 libtcmalloc_minimal.so 00:22:31.599 ----------------------------------------------------- 00:22:31.599 00:22:31.599 09:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:31.857 09:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:22:31.857 09:00:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:22:32.424 09:00:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:22:32.424 09:00:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:22:32.683 09:00:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:22:33.250 09:00:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:33.817 rmmod 
nvme_tcp 00:22:33.817 rmmod nvme_fabrics 00:22:33.817 rmmod nvme_keyring 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 80566 ']' 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 80566 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 80566 ']' 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 80566 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80566 00:22:33.817 killing process with pid 80566 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80566' 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 80566 00:22:33.817 09:00:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 80566 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:35.193 09:00:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:22:35.193 00:22:35.193 real 0m22.363s 00:22:35.193 user 1m35.741s 00:22:35.193 sys 0m4.916s 00:22:35.193 ************************************ 00:22:35.193 END TEST nvmf_fio_host 00:22:35.193 ************************************ 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.193 ************************************ 00:22:35.193 START TEST nvmf_failover 00:22:35.193 ************************************ 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:35.193 * Looking for test storage... 
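Before the failover test rebuilds the environment, the nvmftestfini/nvmf_veth_fini sequence traced just above tears the previous one down in a fixed order: stop the target, drop only the firewall rules the test tagged, then dismantle the veth/bridge topology and the namespace. A condensed sketch; the SPDK_NVMF comment is the tag the setup phase attached to its iptables rules, and remove_spdk_ns is summarized here as a plain namespace delete.

    # Remove only the rules this test added (they carry an SPDK_NVMF comment).
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach the bridge ports and bring them down, then delete everything.
    for br_if in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br_if" nomaster
        ip link set "$br_if" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk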
00:22:35.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:22:35.193 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:35.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.453 --rc genhtml_branch_coverage=1 00:22:35.453 --rc genhtml_function_coverage=1 00:22:35.453 --rc genhtml_legend=1 00:22:35.453 --rc geninfo_all_blocks=1 00:22:35.453 --rc geninfo_unexecuted_blocks=1 00:22:35.453 00:22:35.453 ' 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:35.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.453 --rc genhtml_branch_coverage=1 00:22:35.453 --rc genhtml_function_coverage=1 00:22:35.453 --rc genhtml_legend=1 00:22:35.453 --rc geninfo_all_blocks=1 00:22:35.453 --rc geninfo_unexecuted_blocks=1 00:22:35.453 00:22:35.453 ' 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:35.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.453 --rc genhtml_branch_coverage=1 00:22:35.453 --rc genhtml_function_coverage=1 00:22:35.453 --rc genhtml_legend=1 00:22:35.453 --rc geninfo_all_blocks=1 00:22:35.453 --rc geninfo_unexecuted_blocks=1 00:22:35.453 00:22:35.453 ' 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:35.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.453 --rc genhtml_branch_coverage=1 00:22:35.453 --rc genhtml_function_coverage=1 00:22:35.453 --rc genhtml_legend=1 00:22:35.453 --rc geninfo_all_blocks=1 00:22:35.453 --rc geninfo_unexecuted_blocks=1 00:22:35.453 00:22:35.453 ' 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.453 
09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:35.453 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:35.453 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
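nvmf_veth_init, which the failover test runs next (and which the fio_host run performed earlier), builds a small bridged topology: two veth pairs for the initiator side stay in the root namespace, two more have their peer ends moved into the target namespace, and the four bridge-side ends are enslaved to one bridge. A condensed sketch using the names and addresses seen throughout this log; the up/ACCEPT steps for the initiator interfaces are omitted for brevity.

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # One bridge ties the four root-namespace peer ends together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for br_if in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br_if" up
        ip link set "$br_if" master nvmf_br
    done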
00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:35.454 Cannot find device "nvmf_init_br" 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:35.454 Cannot find device "nvmf_init_br2" 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:22:35.454 Cannot find device "nvmf_tgt_br" 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:35.454 Cannot find device "nvmf_tgt_br2" 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:35.454 Cannot find device "nvmf_init_br" 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:35.454 Cannot find device "nvmf_init_br2" 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:35.454 Cannot find device "nvmf_tgt_br" 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:35.454 Cannot find device "nvmf_tgt_br2" 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:35.454 Cannot find device "nvmf_br" 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:35.454 Cannot find device "nvmf_init_if" 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:35.454 Cannot find device "nvmf_init_if2" 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:35.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:22:35.454 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:35.713 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:35.713 
09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:35.713 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:35.713 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:22:35.713 00:22:35.713 --- 10.0.0.3 ping statistics --- 00:22:35.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.713 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:35.713 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:35.713 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:22:35.713 00:22:35.713 --- 10.0.0.4 ping statistics --- 00:22:35.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.713 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:35.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:35.713 00:22:35.713 --- 10.0.0.1 ping statistics --- 00:22:35.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.713 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:35.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:22:35.713 00:22:35.713 --- 10.0.0.2 ping statistics --- 00:22:35.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.713 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:35.713 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:35.971 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:35.971 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:35.971 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.971 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:35.972 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=81172 00:22:35.972 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:35.972 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 81172 00:22:35.972 09:00:13 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 81172 ']' 00:22:35.972 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.972 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:35.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.972 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.972 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:35.972 09:00:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:35.972 [2024-09-28 09:00:13.841025] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:22:35.972 [2024-09-28 09:00:13.842120] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.230 [2024-09-28 09:00:14.024543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:36.489 [2024-09-28 09:00:14.251783] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.489 [2024-09-28 09:00:14.251856] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.489 [2024-09-28 09:00:14.251897] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.489 [2024-09-28 09:00:14.251908] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.489 [2024-09-28 09:00:14.251919] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
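For reference, the veth/namespace topology that nvmf_veth_init assembled above (before the target was started) condenses to the following sketch; interface names and addresses are exactly as traced, while the stale-device cleanup and the iptables ACCEPT rules are omitted and the ordering is slightly regrouped:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # first initiator address
    ip addr add 10.0.0.2/24 dev nvmf_init_if2                                 # second initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

The four ping checks in the trace then confirm connectivity in both directions across the bridge: the root namespace reaches 10.0.0.3 and 10.0.0.4, and nvmf_tgt_ns_spdk reaches 10.0.0.1 and 10.0.0.2.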
00:22:36.489 [2024-09-28 09:00:14.252702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.489 [2024-09-28 09:00:14.252917] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.489 [2024-09-28 09:00:14.252937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.489 [2024-09-28 09:00:14.413263] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:37.056 09:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:37.056 09:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:37.056 09:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:37.056 09:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:37.056 09:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:37.056 09:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.056 09:00:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:37.314 [2024-09-28 09:00:15.084138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.314 09:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:37.573 Malloc0 00:22:37.573 09:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:37.831 09:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:38.089 09:00:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:38.348 [2024-09-28 09:00:16.124293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:38.348 09:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:38.348 [2024-09-28 09:00:16.340403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:38.607 09:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:38.607 [2024-09-28 09:00:16.572556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:22:38.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
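Condensed, the RPC sequence that provisions the target above is the following sketch (arguments exactly as traced; the rpc, nqn, and port variables are shorthands added here for readability):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, options as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0                    # 64 MB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001   # allow any host, fixed serial number
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc0
    for port in 4420 4421 4422; do                               # three listeners = three candidate paths
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s "$port"
    done

With the subsystem in place, bdevperf is started with -z on /var/tmp/bdevperf.sock, so the workload only begins once perform_tests is issued over that socket; that is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock..." message just above.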
00:22:38.607 09:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=81230 00:22:38.607 09:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:38.607 09:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.607 09:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 81230 /var/tmp/bdevperf.sock 00:22:38.607 09:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 81230 ']' 00:22:38.607 09:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.607 09:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.607 09:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.607 09:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.607 09:00:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:39.986 09:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:39.986 09:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:39.986 09:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:39.986 NVMe0n1 00:22:39.986 09:00:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:40.245 00:22:40.504 09:00:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=81258 00:22:40.504 09:00:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:40.504 09:00:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:41.441 09:00:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:41.700 09:00:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:44.988 09:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:44.988 00:22:44.988 09:00:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:45.248 09:00:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:48.550 09:00:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:48.550 [2024-09-28 09:00:26.413454] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:48.550 09:00:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:49.522 09:00:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:49.782 09:00:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 81258 00:22:56.349 { 00:22:56.349 "results": [ 00:22:56.349 { 00:22:56.350 "job": "NVMe0n1", 00:22:56.350 "core_mask": "0x1", 00:22:56.350 "workload": "verify", 00:22:56.350 "status": "finished", 00:22:56.350 "verify_range": { 00:22:56.350 "start": 0, 00:22:56.350 "length": 16384 00:22:56.350 }, 00:22:56.350 "queue_depth": 128, 00:22:56.350 "io_size": 4096, 00:22:56.350 "runtime": 15.008532, 00:22:56.350 "iops": 8147.965437259287, 00:22:56.350 "mibps": 31.82798998929409, 00:22:56.350 "io_failed": 3389, 00:22:56.350 "io_timeout": 0, 00:22:56.350 "avg_latency_us": 15254.77731842848, 00:22:56.350 "min_latency_us": 655.36, 00:22:56.350 "max_latency_us": 17635.14181818182 00:22:56.350 } 00:22:56.350 ], 00:22:56.350 "core_count": 1 00:22:56.350 } 00:22:56.350 09:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 81230 00:22:56.350 09:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 81230 ']' 00:22:56.350 09:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 81230 00:22:56.350 09:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:22:56.350 09:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:56.350 09:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81230 00:22:56.350 09:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:56.350 09:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:56.350 09:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81230' 00:22:56.350 killing process with pid 81230 00:22:56.350 09:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 81230 00:22:56.350 09:00:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 81230 00:22:56.616 09:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:56.616 [2024-09-28 09:00:16.678141] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
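The listener shuffling traced above is the failover exercise itself; condensed from the failover.sh steps (ports, sleeps, and arguments as traced; the rpc, brpc, and nqn variables are readability shorthands added here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    brpc="$rpc -s /var/tmp/bdevperf.sock"
    nqn=nqn.2016-06.io.spdk:cnode1
    # Give the NVMe0 bdev two paths to the subsystem, then let the 15 s verify run start.
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n "$nqn"
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n "$nqn"
    sleep 1
    $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420   # I/O fails over to 4421
    sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n "$nqn"
    $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4421   # fail over to 4422
    sleep 3
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420      # re-advertise the original port
    sleep 1
    $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4422   # fail back to 4420

The summary block above is self-consistent: 8147.97 IOPS × 4096-byte I/O ≈ 31.83 MiB/s, matching the reported "mibps"; the 3389 failed I/Os line up with the ABORTED - SQ DELETION completions in the try.txt dump that follows, i.e. requests caught in flight while a listener was being torn down.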
00:22:56.616 [2024-09-28 09:00:16.678300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81230 ] 00:22:56.616 [2024-09-28 09:00:16.839959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.616 [2024-09-28 09:00:17.042540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.616 [2024-09-28 09:00:17.204503] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:56.616 Running I/O for 15 seconds... 00:22:56.616 6420.00 IOPS, 25.08 MiB/s [2024-09-28 09:00:19.529485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.616 [2024-09-28 09:00:19.529592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.529622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.616 [2024-09-28 09:00:19.529645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.529666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.616 [2024-09-28 09:00:19.529687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.529706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.616 [2024-09-28 09:00:19.529726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.529745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:22:56.616 [2024-09-28 09:00:19.530051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.616 [2024-09-28 09:00:19.530086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:59680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530241] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530713] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:59808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.616 [2024-09-28 09:00:19.530933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.616 [2024-09-28 09:00:19.530958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.530988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:59832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:59840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:59856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:59936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 
[2024-09-28 09:00:19.531714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:59968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.531968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.531988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:59992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:56 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.617 [2024-09-28 09:00:19.532870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.617 [2024-09-28 09:00:19.532895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.532917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.532943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.532964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.532989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:60184 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 
09:00:19.533678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.533977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.533997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.618 [2024-09-28 09:00:19.534718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.618 [2024-09-28 09:00:19.534742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.534762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.534784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.534831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.534859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.534880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.534904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.534924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.534964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.534985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.535029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.535075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.535118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.535164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.535233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.535275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.535319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.535361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 
09:00:19.535560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.535975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.535995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:19.536013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.536033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:19.536051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.536071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:22:56.619 [2024-09-28 09:00:19.536095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.619 [2024-09-28 09:00:19.536111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.619 [2024-09-28 09:00:19.536128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60568 len:8 PRP1 0x0 PRP2 0x0 00:22:56.619 [2024-09-28 09:00:19.536146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:19.536386] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b780 was disconnected and freed. reset controller. 00:22:56.619 [2024-09-28 09:00:19.536434] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:22:56.619 [2024-09-28 09:00:19.536474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:56.619 [2024-09-28 09:00:19.540275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:56.619 [2024-09-28 09:00:19.540332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:56.619 [2024-09-28 09:00:19.573797] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
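The block above is the first failover cycle captured in this run: once the target tears down the submission queue, every outstanding READ/WRITE on the I/O qpair is completed with ABORTED - SQ DELETION, qpair 0x61500002b780 is disconnected and freed, bdev_nvme starts a failover of the transport ID from 10.0.0.3:4420 to 10.0.0.3:4421, and the subsequent controller reset succeeds. How long the host keeps retrying these resets is governed by the bdev_nvme reconnect parameters; the call below is a minimal illustrative sketch, not part of this log, with values chosen only for illustration (option names as in SPDK's scripts/rpc.py):

    # allow up to 30 s of controller loss, retrying the reconnect every 2 s,
    # and fail pending I/O after 10 s so the upper layer can switch paths sooner
    scripts/rpc.py bdev_nvme_set_options \
        --ctrlr-loss-timeout-sec 30 \
        --reconnect-delay-sec 2 \
        --fast-io-fail-timeout-sec 10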
00:22:56.619 7022.50 IOPS, 27.43 MiB/s 7504.33 IOPS, 29.31 MiB/s 7733.25 IOPS, 30.21 MiB/s [2024-09-28 09:00:23.164248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.619 [2024-09-28 09:00:23.164356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:23.164398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:23.164428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:23.164450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:23.164468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:23.164488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:23.164507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:23.164526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:23.164544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.619 [2024-09-28 09:00:23.164563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.619 [2024-09-28 09:00:23.164582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.164601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.164619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.164639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.164657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.164676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.164693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.164713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.164730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.164749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.164767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.164813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.164866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.164890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.164911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.164975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.165000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.165043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.165086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.165143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.165211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.165250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.165288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:56.620 [2024-09-28 09:00:23.165308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.165327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.165366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.165404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.165460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.620 [2024-09-28 09:00:23.165498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.165545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.165586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.165624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.165662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.165699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165718] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.165755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.165794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.165848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.165902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.165943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.165963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.165982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.166002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.166021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.166041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.166060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.166080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.166109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.620 [2024-09-28 09:00:23.166130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.620 [2024-09-28 09:00:23.166155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166175] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.166193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.166232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.166286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.166324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.166362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.166400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.166437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.166474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.166512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.166549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:47392 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.166587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:47400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.166633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.166672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.166710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.166748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.166786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.166851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.166892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.166948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.166970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.166989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:56.621 [2024-09-28 09:00:23.167028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.167068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.167108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.167156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.167228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.167286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.167351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.167393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.167436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.167479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.621 [2024-09-28 09:00:23.167521] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.167563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.167605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.167661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:47440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.167715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.167769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.167833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.167875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:47472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.621 [2024-09-28 09:00:23.167915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.621 [2024-09-28 09:00:23.167946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.167968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.167989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.168008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:47496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.168048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.168088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.168130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:47520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.168170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.168210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.168250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.622 [2024-09-28 09:00:23.168289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.622 [2024-09-28 09:00:23.168336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.622 [2024-09-28 09:00:23.168378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.622 [2024-09-28 09:00:23.168418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.622 [2024-09-28 09:00:23.168471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.622 [2024-09-28 09:00:23.168509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.622 [2024-09-28 09:00:23.168548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.622 [2024-09-28 09:00:23.168601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.622 [2024-09-28 09:00:23.168641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.622 [2024-09-28 09:00:23.168679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.168718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.168757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.168851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.168896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.168948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.168971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.169000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.169021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.622 [2024-09-28 09:00:23.169040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.169061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(6) to be set 00:22:56.622 [2024-09-28 09:00:23.169086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.622 [2024-09-28 09:00:23.169102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.622 [2024-09-28 09:00:23.169118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47600 len:8 PRP1 0x0 PRP2 0x0 00:22:56.622 [2024-09-28 09:00:23.169137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.169156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.622 [2024-09-28 09:00:23.169200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.622 [2024-09-28 09:00:23.169214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48056 len:8 PRP1 0x0 PRP2 0x0 00:22:56.622 [2024-09-28 09:00:23.169231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.169248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.622 [2024-09-28 09:00:23.169261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.622 [2024-09-28 09:00:23.169275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48064 len:8 PRP1 0x0 PRP2 0x0 00:22:56.622 [2024-09-28 09:00:23.169292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.169309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.622 [2024-09-28 09:00:23.169323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.622 [2024-09-28 09:00:23.169336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48072 len:8 PRP1 0x0 PRP2 0x0 00:22:56.622 [2024-09-28 09:00:23.169353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.169370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.622 [2024-09-28 09:00:23.169384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.622 [2024-09-28 09:00:23.169398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48080 len:8 PRP1 0x0 PRP2 0x0 00:22:56.622 [2024-09-28 09:00:23.169414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.169431] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.622 [2024-09-28 09:00:23.169445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.622 [2024-09-28 09:00:23.169459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48088 len:8 PRP1 0x0 PRP2 0x0 00:22:56.622 [2024-09-28 09:00:23.169484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.169503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.622 [2024-09-28 09:00:23.169517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.622 [2024-09-28 09:00:23.169532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48096 len:8 PRP1 0x0 PRP2 0x0 00:22:56.622 [2024-09-28 09:00:23.169548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.169566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.622 [2024-09-28 09:00:23.169579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.622 [2024-09-28 09:00:23.169593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48104 len:8 PRP1 0x0 PRP2 0x0 00:22:56.622 [2024-09-28 09:00:23.169609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.622 [2024-09-28 09:00:23.169626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.622 [2024-09-28 09:00:23.169640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.622 [2024-09-28 09:00:23.169654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48112 len:8 PRP1 0x0 PRP2 0x0 00:22:56.622 [2024-09-28 09:00:23.169670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.169686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.169700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.169714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48120 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.169731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 
09:00:23.169748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.169761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.169775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48128 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.169791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.169808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.169855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.169873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48136 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.169890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.169909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.169923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.169937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48144 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.169954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.169972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.169986] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.170009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48152 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.170028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.170046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.170060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.170075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48160 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.170092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.170109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.170123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.170137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48168 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.170159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.170177] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.170191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.170205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48176 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.170222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.170239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.170253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.170267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48184 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.170299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.170316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.170329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.170343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48192 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.170360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.170377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.170393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.170407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48200 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.170423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.170441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.170454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.170468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48208 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.170484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.170502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.170526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.170542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48216 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.170559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.170576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:22:56.623 [2024-09-28 09:00:23.170590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.170604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48224 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.170620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.170637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.623 [2024-09-28 09:00:23.170651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.623 [2024-09-28 09:00:23.170665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48232 len:8 PRP1 0x0 PRP2 0x0 00:22:56.623 [2024-09-28 09:00:23.170682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.170952] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002ba00 was disconnected and freed. reset controller. 00:22:56.623 [2024-09-28 09:00:23.170982] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:22:56.623 [2024-09-28 09:00:23.171049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.623 [2024-09-28 09:00:23.171077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.171100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.623 [2024-09-28 09:00:23.171118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.171136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.623 [2024-09-28 09:00:23.171153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.171171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.623 [2024-09-28 09:00:23.171189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:23.171206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:56.623 [2024-09-28 09:00:23.171290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:56.623 [2024-09-28 09:00:23.175122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:56.623 [2024-09-28 09:00:23.214962] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
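The burst of "aborting queued i/o" / "ABORTED - SQ DELETION" notices above is bdev_nvme manually completing the I/O still queued on the path to 10.0.0.3:4421 before it fails over to the next trid (10.0.0.3:4422) and resets the controller. As a rough sketch of how this multipath layout is put together, using the same rpc.py calls that are traced later in this log (the address, ports, NQN and socket path are specific to this run and are assumptions anywhere else):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock          # bdevperf (initiator-side) RPC socket
  NQN=nqn.2016-06.io.spdk:cnode1
  # target side (default RPC socket): listen on the alternate failover ports
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4421
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.3 -s 4422
  # initiator side: register all three paths under the same controller name
  for port in 4420 4421 4422; do
    $RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s $port -f ipv4 -n $NQN
  done
  # losing the active path (for example detaching it, as the suite does later in this log)
  # produces a failover like the one logged above
  $RPC -s "$SOCK" bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n $NQN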
00:22:56.623 7777.60 IOPS, 30.38 MiB/s 7889.33 IOPS, 30.82 MiB/s 7977.14 IOPS, 31.16 MiB/s 8030.00 IOPS, 31.37 MiB/s 8062.22 IOPS, 31.49 MiB/s [2024-09-28 09:00:27.696140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.623 [2024-09-28 09:00:27.696211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:27.696278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.623 [2024-09-28 09:00:27.696307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:27.696328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.623 [2024-09-28 09:00:27.696347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:27.696366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.623 [2024-09-28 09:00:27.696383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:27.696402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.623 [2024-09-28 09:00:27.696420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:27.696439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.623 [2024-09-28 09:00:27.696457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:27.696475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.623 [2024-09-28 09:00:27.696493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.623 [2024-09-28 09:00:27.696512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.696530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.696549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.696567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.696586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.696604] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.696623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.696640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.696659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.696677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.696695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.696713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.696732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.696760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.696782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.696858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.696881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.696901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.696921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.696940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.696962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.696982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.697021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.697060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.697099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.697153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.697204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.697241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.697277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.697314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.697371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.697411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.697449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.697485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 
[2024-09-28 09:00:27.697504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.624 [2024-09-28 09:00:27.697522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.697558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.697595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.697633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.697669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.697706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.697760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.697798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.697836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.697899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.697937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.697974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.697994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.698012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.698040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.698058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.698077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.698095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.698114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.698132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.698152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.624 [2024-09-28 09:00:27.698170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.624 [2024-09-28 09:00:27.698189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.698207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.698244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.698283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:27 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.698322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.698367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.698406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.698443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.698481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.625 [2024-09-28 09:00:27.698519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.625 [2024-09-28 09:00:27.698556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.625 [2024-09-28 09:00:27.698594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.625 [2024-09-28 09:00:27.698631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.625 [2024-09-28 09:00:27.698669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97552 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.625 [2024-09-28 09:00:27.698706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.625 [2024-09-28 09:00:27.698743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.625 [2024-09-28 09:00:27.698780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.625 [2024-09-28 09:00:27.698849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.698902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.698940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.698978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.698998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.699016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.699053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.699091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 
[2024-09-28 09:00:27.699129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.699167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.699205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.699242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.699279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.699316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.699361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.699400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.699438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.625 [2024-09-28 09:00:27.699477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.625 [2024-09-28 09:00:27.699515] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.625 [2024-09-28 09:00:27.699553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.625 [2024-09-28 09:00:27.699572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.625 [2024-09-28 09:00:27.699590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.699610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.699628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.699647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.699665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.699684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.699703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.699729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.699748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.699767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.699785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.699816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.699837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.699856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.699883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.699904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.699923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.699942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.699960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.699980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.699998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.700036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.700074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.700112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.700149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.700187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.700224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.700262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.700299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.700336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.700384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.700422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.700459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.700496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.700534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.700571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.700609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.700650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.700687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 
09:00:27.700706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.626 [2024-09-28 09:00:27.700724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.700762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.700853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.700906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.700947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.700968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.700987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.701008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.701028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.701049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.701068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.701088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.701107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.701142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.701161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.701195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.701213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.626 [2024-09-28 09:00:27.701232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.626 [2024-09-28 09:00:27.701251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.627 [2024-09-28 09:00:27.701270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.627 [2024-09-28 09:00:27.701287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.627 [2024-09-28 09:00:27.701307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.627 [2024-09-28 09:00:27.701325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.627 [2024-09-28 09:00:27.701347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.627 [2024-09-28 09:00:27.701366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.627 [2024-09-28 09:00:27.701385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.627 [2024-09-28 09:00:27.701403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.627 [2024-09-28 09:00:27.701444] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:22:56.627 [2024-09-28 09:00:27.701470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.627 [2024-09-28 09:00:27.701485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.627 [2024-09-28 09:00:27.701500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97320 len:8 PRP1 0x0 PRP2 0x0 00:22:56.627 [2024-09-28 09:00:27.701517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.627 [2024-09-28 09:00:27.701749] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002c180 was disconnected and freed. reset controller. 
00:22:56.627 [2024-09-28 09:00:27.701776] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:22:56.627 [2024-09-28 09:00:27.701852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.627 [2024-09-28 09:00:27.701880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.627 [2024-09-28 09:00:27.701900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.627 [2024-09-28 09:00:27.701917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.627 [2024-09-28 09:00:27.701934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.627 [2024-09-28 09:00:27.701951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.627 [2024-09-28 09:00:27.701969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.627 [2024-09-28 09:00:27.701986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.627 [2024-09-28 09:00:27.702002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:56.627 [2024-09-28 09:00:27.702066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:56.627 [2024-09-28 09:00:27.705737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:56.627 [2024-09-28 09:00:27.741468] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
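That is the third and last failover of the 15-second run (10.0.0.3:4422 back to 4420); the summary table below reports the aggregate throughput, and failover.sh then checks that exactly three controller resets succeeded (the grep -c / count=3 trace that follows). A minimal sketch of that pass/fail check, assuming the grep is applied to the captured bdevperf output (try.txt in this run; the file argument itself is not visible in the trace):

  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  count=$(grep -c 'Resetting controller successful' "$log")
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi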
00:22:56.627 8047.10 IOPS, 31.43 MiB/s 8078.64 IOPS, 31.56 MiB/s 8106.00 IOPS, 31.66 MiB/s 8125.62 IOPS, 31.74 MiB/s 8136.57 IOPS, 31.78 MiB/s 8146.60 IOPS, 31.82 MiB/s 00:22:56.627 Latency(us) 00:22:56.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.627 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:56.627 Verification LBA range: start 0x0 length 0x4000 00:22:56.627 NVMe0n1 : 15.01 8147.97 31.83 225.80 0.00 15254.78 655.36 17635.14 00:22:56.627 =================================================================================================================== 00:22:56.627 Total : 8147.97 31.83 225.80 0.00 15254.78 655.36 17635.14 00:22:56.627 Received shutdown signal, test time was about 15.000000 seconds 00:22:56.627 00:22:56.627 Latency(us) 00:22:56.627 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.627 =================================================================================================================== 00:22:56.627 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:56.627 09:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:56.627 09:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:56.627 09:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:56.627 09:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=81434 00:22:56.627 09:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:56.627 09:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 81434 /var/tmp/bdevperf.sock 00:22:56.627 09:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 81434 ']' 00:22:56.627 09:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:56.627 09:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:56.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:56.627 09:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:56.627 09:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:56.627 09:00:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:57.563 09:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:57.563 09:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:22:57.563 09:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:57.821 [2024-09-28 09:00:35.728067] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:57.821 09:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:58.079 [2024-09-28 09:00:35.956204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:22:58.079 09:00:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:58.338 NVMe0n1 00:22:58.338 09:00:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:58.597 00:22:58.597 09:00:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.164 00:22:59.164 09:00:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:59.164 09:00:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:59.164 09:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.732 09:00:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:03.016 09:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:03.016 09:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:03.016 09:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:03.016 09:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=81517 00:23:03.016 09:00:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 81517 00:23:03.951 { 00:23:03.951 "results": [ 00:23:03.951 { 00:23:03.951 "job": "NVMe0n1", 00:23:03.951 "core_mask": "0x1", 00:23:03.951 "workload": "verify", 00:23:03.951 "status": "finished", 00:23:03.951 "verify_range": { 00:23:03.951 "start": 0, 00:23:03.951 "length": 16384 00:23:03.951 }, 00:23:03.951 "queue_depth": 128, 00:23:03.951 "io_size": 4096, 
00:23:03.951 "runtime": 1.010386, 00:23:03.951 "iops": 6361.92504646739, 00:23:03.951 "mibps": 24.85126971276324, 00:23:03.951 "io_failed": 0, 00:23:03.951 "io_timeout": 0, 00:23:03.951 "avg_latency_us": 20044.285885614077, 00:23:03.951 "min_latency_us": 2651.2290909090907, 00:23:03.951 "max_latency_us": 17873.454545454544 00:23:03.951 } 00:23:03.951 ], 00:23:03.951 "core_count": 1 00:23:03.951 } 00:23:03.951 09:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:03.951 [2024-09-28 09:00:34.538669] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:03.951 [2024-09-28 09:00:34.539398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81434 ] 00:23:03.951 [2024-09-28 09:00:34.706051] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.951 [2024-09-28 09:00:34.865293] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.951 [2024-09-28 09:00:35.018050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:03.951 [2024-09-28 09:00:37.418545] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:23:03.951 [2024-09-28 09:00:37.419161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.951 [2024-09-28 09:00:37.419212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.951 [2024-09-28 09:00:37.419272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.951 [2024-09-28 09:00:37.419293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.951 [2024-09-28 09:00:37.419311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.951 [2024-09-28 09:00:37.419331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.951 [2024-09-28 09:00:37.419349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.951 [2024-09-28 09:00:37.419371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.951 [2024-09-28 09:00:37.419389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:03.951 [2024-09-28 09:00:37.419491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:03.951 [2024-09-28 09:00:37.419541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:23:03.951 [2024-09-28 09:00:37.427740] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:03.951 Running I/O for 1 seconds... 
00:23:03.951 6300.00 IOPS, 24.61 MiB/s 00:23:03.951 Latency(us) 00:23:03.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.951 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:03.951 Verification LBA range: start 0x0 length 0x4000 00:23:03.951 NVMe0n1 : 1.01 6361.93 24.85 0.00 0.00 20044.29 2651.23 17873.45 00:23:03.951 =================================================================================================================== 00:23:03.951 Total : 6361.93 24.85 0.00 0.00 20044.29 2651.23 17873.45 00:23:03.951 09:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:03.951 09:00:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.210 09:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:04.468 09:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:04.468 09:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.727 09:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:04.986 09:00:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:08.272 09:00:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:08.272 09:00:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:08.272 09:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 81434 00:23:08.272 09:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 81434 ']' 00:23:08.272 09:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 81434 00:23:08.272 09:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:08.272 09:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:08.272 09:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81434 00:23:08.272 killing process with pid 81434 00:23:08.272 09:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:08.272 09:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:08.272 09:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81434' 00:23:08.272 09:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 81434 00:23:08.272 09:00:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 81434 00:23:09.651 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:09.651 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:09.652 
09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:09.652 rmmod nvme_tcp 00:23:09.652 rmmod nvme_fabrics 00:23:09.652 rmmod nvme_keyring 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 81172 ']' 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 81172 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 81172 ']' 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 81172 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81172 00:23:09.652 killing process with pid 81172 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81172' 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 81172 00:23:09.652 09:00:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 81172 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:23:11.031 00:23:11.031 real 0m35.847s 00:23:11.031 user 2m16.010s 00:23:11.031 sys 0m5.793s 00:23:11.031 ************************************ 00:23:11.031 END TEST nvmf_failover 00:23:11.031 ************************************ 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.031 ************************************ 00:23:11.031 START TEST nvmf_host_discovery 00:23:11.031 ************************************ 00:23:11.031 09:00:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:11.291 * Looking for test storage... 
00:23:11.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:11.291 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:11.291 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:23:11.291 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:11.291 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:11.291 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.291 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:11.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.292 --rc genhtml_branch_coverage=1 00:23:11.292 --rc genhtml_function_coverage=1 00:23:11.292 --rc genhtml_legend=1 00:23:11.292 --rc geninfo_all_blocks=1 00:23:11.292 --rc geninfo_unexecuted_blocks=1 00:23:11.292 00:23:11.292 ' 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:11.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.292 --rc genhtml_branch_coverage=1 00:23:11.292 --rc genhtml_function_coverage=1 00:23:11.292 --rc genhtml_legend=1 00:23:11.292 --rc geninfo_all_blocks=1 00:23:11.292 --rc geninfo_unexecuted_blocks=1 00:23:11.292 00:23:11.292 ' 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:11.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.292 --rc genhtml_branch_coverage=1 00:23:11.292 --rc genhtml_function_coverage=1 00:23:11.292 --rc genhtml_legend=1 00:23:11.292 --rc geninfo_all_blocks=1 00:23:11.292 --rc geninfo_unexecuted_blocks=1 00:23:11.292 00:23:11.292 ' 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:11.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.292 --rc genhtml_branch_coverage=1 00:23:11.292 --rc genhtml_function_coverage=1 00:23:11.292 --rc genhtml_legend=1 00:23:11.292 --rc geninfo_all_blocks=1 00:23:11.292 --rc geninfo_unexecuted_blocks=1 00:23:11.292 00:23:11.292 ' 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.292 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:11.292 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:11.293 Cannot find device "nvmf_init_br" 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:11.293 Cannot find device "nvmf_init_br2" 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:11.293 Cannot find device "nvmf_tgt_br" 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:11.293 Cannot find device "nvmf_tgt_br2" 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:11.293 Cannot find device "nvmf_init_br" 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:23:11.293 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:11.552 Cannot find device "nvmf_init_br2" 00:23:11.552 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:23:11.552 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:11.552 Cannot find device "nvmf_tgt_br" 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:11.553 Cannot find device "nvmf_tgt_br2" 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:11.553 Cannot find device "nvmf_br" 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:11.553 Cannot find device "nvmf_init_if" 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:11.553 Cannot find device "nvmf_init_if2" 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:11.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:11.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:11.553 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:11.812 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:11.812 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:11.812 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:11.813 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:11.813 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:23:11.813 00:23:11.813 --- 10.0.0.3 ping statistics --- 00:23:11.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.813 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:11.813 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:11.813 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:23:11.813 00:23:11.813 --- 10.0.0.4 ping statistics --- 00:23:11.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.813 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:11.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:11.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:23:11.813 00:23:11.813 --- 10.0.0.1 ping statistics --- 00:23:11.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.813 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:11.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:11.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:23:11.813 00:23:11.813 --- 10.0.0.2 ping statistics --- 00:23:11.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.813 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # return 0 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=81852 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 81852 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 81852 ']' 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:11.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:11.813 09:00:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:11.813 [2024-09-28 09:00:49.747343] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:23:11.813 [2024-09-28 09:00:49.747770] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.072 [2024-09-28 09:00:49.919865] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.353 [2024-09-28 09:00:50.075966] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.353 [2024-09-28 09:00:50.076026] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.353 [2024-09-28 09:00:50.076044] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.353 [2024-09-28 09:00:50.076059] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.353 [2024-09-28 09:00:50.076069] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.353 [2024-09-28 09:00:50.076103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.353 [2024-09-28 09:00:50.220016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.952 [2024-09-28 09:00:50.775466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.952 [2024-09-28 09:00:50.783617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.952 09:00:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.952 null0 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.952 null1 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=81885 00:23:12.952 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:12.953 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 81885 /tmp/host.sock 00:23:12.953 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 81885 ']' 00:23:12.953 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:23:12.953 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:12.953 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:12.953 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:12.953 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:12.953 09:00:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.953 [2024-09-28 09:00:50.936592] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:23:12.953 [2024-09-28 09:00:50.937095] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81885 ] 00:23:13.212 [2024-09-28 09:00:51.111127] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.471 [2024-09-28 09:00:51.331468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.731 [2024-09-28 09:00:51.479969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:13.990 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:13.990 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:13.990 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.990 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:13.990 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:13.991 09:00:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:13.991 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:14.250 09:00:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:14.250 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.251 09:00:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:14.251 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.510 [2024-09-28 09:00:52.276089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.510 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:14.511 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.770 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:23:14.770 09:00:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:15.029 [2024-09-28 09:00:52.922083] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:15.029 [2024-09-28 09:00:52.922123] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:15.029 [2024-09-28 09:00:52.922172] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:15.029 
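The 'local max=10' / '(( max-- ))' / 'eval' / 'sleep 1' pattern that keeps reappearing above is autotest_common.sh's waitforcondition helper: it re-evaluates a condition string up to ten times, one second apart, before giving up. Reconstructed from the line numbers in the trace (the failure path at the end is assumed):

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          # e.g. cond='[[ "$(get_subsystem_names)" == "nvme0" ]]'
          if eval "$cond"; then
              return 0
          fi
          sleep 1
      done
      return 1
  }
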
[2024-09-28 09:00:52.928156] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:23:15.029 [2024-09-28 09:00:52.994618] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:23:15.029 [2024-09-28 09:00:52.994788] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.597 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
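Once the target listens on 4420 and the host NQN has been allowed, the discovery poller attaches controller nvme0 and the null0 namespace surfaces as bdev nvme0n1, which is what the string comparisons above wait for. Those comparisons go through small helpers in host/discovery.sh that flatten the RPC output into one sorted, space-separated line; reconstructed from the jq/sort/xargs calls in the trace:

  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {
      # ports (trsvcid) of every path of one controller, e.g. "4420 4421"
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
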
00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:15.858 09:00:53 
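The notification checks work the same way: notify_get_notifications -i <id> returns only the events newer than the given offset, and the helper advances notify_id by however many it saw (0 -> 1 after nvme0n1 appears, 1 -> 2 once null1 shows up as nvme0n2, and later 2 -> 4 when both are unregistered). Roughly, per the trace - the exact update arithmetic is inferred:

  get_notification_count() {
      # bdev register/unregister events newer than the last consumed id
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
          | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }
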
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.858 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.858 [2024-09-28 09:00:53.849831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:15.858 [2024-09-28 09:00:53.850486] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:15.858 [2024-09-28 09:00:53.850584] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:16.119 [2024-09-28 09:00:53.856520] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:23:16.119 09:00:53 
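Adding a second listener on port 4421 raises an asynchronous event (AER) on the discovery controller; the host re-reads the discovery log page and, as the log lines above show, attaches 10.0.0.3:4421 as an additional path on the existing nvme0 controller rather than creating a new controller, so the bdev list is unchanged while the path list grows. Target-side sketch (rpc.py invocation assumed):

  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421
  # on the host the same controller now reports both portals
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid'        # 4420 and 4421
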
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.119 [2024-09-28 09:00:53.915327] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:23:16.119 [2024-09-28 09:00:53.915353] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:16.119 [2024-09-28 09:00:53.915364] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.119 09:00:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.119 [2024-09-28 09:00:54.083073] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:16.119 [2024-09-28 09:00:54.083120] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:16.119 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:16.119 [2024-09-28 09:00:54.089101] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:23:16.119 [2024-09-28 09:00:54.089294] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:23:16.119 [2024-09-28 09:00:54.089471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.119 [2024-09-28 09:00:54.089529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.120 [2024-09-28 09:00:54.089548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.120 [2024-09-28 09:00:54.089564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.120 [2024-09-28 09:00:54.089586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.120 [2024-09-28 09:00:54.089609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.120 [2024-09-28 09:00:54.089633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:16.120 [2024-09-28 09:00:54.089655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:16.120 [2024-09-28 09:00:54.089677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:23:16.120 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.120 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.120 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.120 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.120 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.120 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.120 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # return 0 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:16.379 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:16.380 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:16.640 09:00:54 
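bdev_nvme_stop_discovery -b nvme detaches the discovery controller together with every path and namespace it created, so the same queries that returned "nvme0" and "nvme0n1 nvme0n2" a moment ago must drain back to empty, and two more unregister notifications are expected on top of the two already consumed. Condensed, this part of the test amounts to:

  rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
  waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
  waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
  # two bdev unregister events since the last consumed id
  rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 | jq '. | length'    # -> 2
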
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.640 09:00:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.577 [2024-09-28 09:00:55.512762] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:17.577 [2024-09-28 09:00:55.512846] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:17.577 [2024-09-28 09:00:55.512900] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:17.577 [2024-09-28 09:00:55.518848] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:23:17.837 [2024-09-28 09:00:55.580482] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:23:17.837 [2024-09-28 09:00:55.580535] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.837 request: 00:23:17.837 { 00:23:17.837 "name": "nvme", 00:23:17.837 "trtype": "tcp", 00:23:17.837 "traddr": "10.0.0.3", 00:23:17.837 "adrfam": "ipv4", 00:23:17.837 "trsvcid": "8009", 00:23:17.837 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:17.837 "wait_for_attach": true, 00:23:17.837 "method": "bdev_nvme_start_discovery", 00:23:17.837 "req_id": 1 00:23:17.837 } 00:23:17.837 Got JSON-RPC error response 00:23:17.837 response: 00:23:17.837 { 00:23:17.837 "code": -17, 00:23:17.837 "message": "File exists" 00:23:17.837 } 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.837 request: 00:23:17.837 { 00:23:17.837 "name": "nvme_second", 00:23:17.837 "trtype": "tcp", 00:23:17.837 "traddr": "10.0.0.3", 00:23:17.837 "adrfam": "ipv4", 00:23:17.837 "trsvcid": "8009", 00:23:17.837 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:17.837 "wait_for_attach": true, 00:23:17.837 "method": "bdev_nvme_start_discovery", 00:23:17.837 "req_id": 1 00:23:17.837 } 00:23:17.837 Got JSON-RPC error response 00:23:17.837 response: 00:23:17.837 { 00:23:17.837 "code": -17, 00:23:17.837 "message": "File exists" 00:23:17.837 } 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:17.837 09:00:55 
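Both duplicate attempts are rejected with JSON-RPC error -17 ("File exists"): a discovery service for 10.0.0.3:8009 is already running, so neither reusing the name nvme nor registering it again as nvme_second is allowed. The NOT wrapper from autotest_common.sh turns the expected rpc_cmd failure into a pass, and the follow-up check confirms that only the original service is still registered:

  NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
      -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
  get_discovery_ctrlrs() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs
  }
  [[ "$(get_discovery_ctrlrs)" == "nvme" ]]
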
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:17.837 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.095 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:18.095 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:18.095 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:18.095 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:18.096 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:18.096 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.096 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:18.096 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:18.096 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:18.096 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.096 09:00:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:19.032 [2024-09-28 09:00:56.856976] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.032 [2024-09-28 09:00:56.857234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002c900 with addr=10.0.0.3, port=8010 00:23:19.032 [2024-09-28 09:00:56.857299] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:19.032 [2024-09-28 09:00:56.857315] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:19.032 [2024-09-28 09:00:56.857327] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:23:19.969 [2024-09-28 09:00:57.857011] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.969 [2024-09-28 09:00:57.857268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002cb80 with addr=10.0.0.3, port=8010 00:23:19.969 [2024-09-28 09:00:57.857333] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:19.969 [2024-09-28 09:00:57.857348] nvme.c: 
831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:19.969 [2024-09-28 09:00:57.857362] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:23:20.907 [2024-09-28 09:00:58.856765] bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:23:20.907 request: 00:23:20.907 { 00:23:20.907 "name": "nvme_second", 00:23:20.907 "trtype": "tcp", 00:23:20.907 "traddr": "10.0.0.3", 00:23:20.907 "adrfam": "ipv4", 00:23:20.907 "trsvcid": "8010", 00:23:20.907 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:20.907 "wait_for_attach": false, 00:23:20.907 "attach_timeout_ms": 3000, 00:23:20.907 "method": "bdev_nvme_start_discovery", 00:23:20.907 "req_id": 1 00:23:20.907 } 00:23:20.907 Got JSON-RPC error response 00:23:20.907 response: 00:23:20.907 { 00:23:20.907 "code": -110, 00:23:20.907 "message": "Connection timed out" 00:23:20.907 } 00:23:20.907 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:20.907 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:20.907 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:20.907 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:20.907 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:20.907 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:20.907 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:20.907 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:20.907 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:20.907 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.907 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:20.907 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:20.907 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.167 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:21.167 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:21.167 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 81885 00:23:21.167 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:21.167 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:21.167 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:21.167 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:21.167 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:21.167 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:21.167 09:00:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:21.167 rmmod nvme_tcp 00:23:21.167 rmmod nvme_fabrics 00:23:21.167 rmmod nvme_keyring 00:23:21.167 09:00:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 81852 ']' 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 81852 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 81852 ']' 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 81852 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81852 00:23:21.167 killing process with pid 81852 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81852' 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 81852 00:23:21.167 09:00:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 81852 00:23:22.104 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:22.104 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:22.104 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:22.104 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:22.104 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:23:22.104 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:22.104 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:23:22.104 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.104 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:22.104 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:22.104 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:22.104 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:22.104 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
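For reference, the two negative discovery cases exercised above reduce to the following RPC calls (a minimal sketch; rpc_cmd in the log is a test-harness wrapper, shown here as a plain scripts/rpc.py invocation against the host-side socket /tmp/host.sock used in this run):

# Sketch only: values are copied from the log output above.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

# A discovery service for 10.0.0.3:8009 already exists ("nvme"), so starting
# another one with the same settings fails with JSON-RPC error -17 (File exists).
$RPC bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Nothing listens on 10.0.0.3:8010; with a 3000 ms attach timeout (-T) the
# connect attempts are refused until the call fails with -110 (Connection timed out).
$RPC bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000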
00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:23:22.363 00:23:22.363 real 0m11.290s 00:23:22.363 user 0m21.163s 00:23:22.363 sys 0m2.171s 00:23:22.363 ************************************ 00:23:22.363 END TEST nvmf_host_discovery 00:23:22.363 ************************************ 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.363 ************************************ 00:23:22.363 START TEST nvmf_host_multipath_status 00:23:22.363 ************************************ 00:23:22.363 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:22.623 * Looking for test storage... 
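The nvmftestfini path that closed out the discovery test above unwinds the NET_TYPE=virt topology; roughly, and assuming the unshown _remove_spdk_ns helper simply deletes the target namespace at the end, the cleanup amounts to:

# Approximate teardown of the veth/bridge test network; interface and
# namespace names are the ones printed in the log.
for br_if in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$br_if" nomaster      # detach each leg from the bridge
    ip link set "$br_if" down
done
ip link delete nvmf_br type bridge                          # drop the bridge
ip link delete nvmf_init_if                                 # host-side initiator veths
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if   # target-side veths
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                            # assumed equivalent of _remove_spdk_ns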
00:23:22.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:22.623 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:22.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.624 --rc genhtml_branch_coverage=1 00:23:22.624 --rc genhtml_function_coverage=1 00:23:22.624 --rc genhtml_legend=1 00:23:22.624 --rc geninfo_all_blocks=1 00:23:22.624 --rc geninfo_unexecuted_blocks=1 00:23:22.624 00:23:22.624 ' 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:22.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.624 --rc genhtml_branch_coverage=1 00:23:22.624 --rc genhtml_function_coverage=1 00:23:22.624 --rc genhtml_legend=1 00:23:22.624 --rc geninfo_all_blocks=1 00:23:22.624 --rc geninfo_unexecuted_blocks=1 00:23:22.624 00:23:22.624 ' 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:22.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.624 --rc genhtml_branch_coverage=1 00:23:22.624 --rc genhtml_function_coverage=1 00:23:22.624 --rc genhtml_legend=1 00:23:22.624 --rc geninfo_all_blocks=1 00:23:22.624 --rc geninfo_unexecuted_blocks=1 00:23:22.624 00:23:22.624 ' 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:22.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.624 --rc genhtml_branch_coverage=1 00:23:22.624 --rc genhtml_function_coverage=1 00:23:22.624 --rc genhtml_legend=1 00:23:22.624 --rc geninfo_all_blocks=1 00:23:22.624 --rc geninfo_unexecuted_blocks=1 00:23:22.624 00:23:22.624 ' 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:22.624 09:01:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:22.624 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:22.624 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:22.625 Cannot find device "nvmf_init_br" 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:22.625 Cannot find device "nvmf_init_br2" 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:22.625 Cannot find device "nvmf_tgt_br" 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:22.625 Cannot find device "nvmf_tgt_br2" 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:23:22.625 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:22.884 Cannot find device "nvmf_init_br" 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:22.884 Cannot find device "nvmf_init_br2" 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:22.884 Cannot find device "nvmf_tgt_br" 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:22.884 Cannot find device "nvmf_tgt_br2" 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:22.884 Cannot find device "nvmf_br" 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:23:22.884 Cannot find device "nvmf_init_if" 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:22.884 Cannot find device "nvmf_init_if2" 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:22.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:22.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:22.884 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:23.143 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:23.143 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:23:23.143 00:23:23.143 --- 10.0.0.3 ping statistics --- 00:23:23.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.143 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:23.143 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:23.143 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:23:23.143 00:23:23.143 --- 10.0.0.4 ping statistics --- 00:23:23.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.143 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:23.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:23.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:23:23.143 00:23:23.143 --- 10.0.0.1 ping statistics --- 00:23:23.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.143 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:23.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:23:23.143 00:23:23.143 --- 10.0.0.2 ping statistics --- 00:23:23.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.143 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # return 0 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=82406 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 82406 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 82406 ']' 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:23.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
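nvmfappstart above boots the target inside the target namespace and waits for its RPC socket; a minimal standalone equivalent (the polling loop is illustrative, the harness's waitforlisten does more bookkeeping) looks like:

# Launch nvmf_tgt in the target namespace on cores 0-1 (-m 0x3) with
# tracepoint group mask 0xFFFF (-e), then wait for /var/tmp/spdk.sock.
SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                          # keep polling until the target answers RPCs
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"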
00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:23.143 09:01:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:23.143 [2024-09-28 09:01:01.086487] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:23.144 [2024-09-28 09:01:01.086653] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.402 [2024-09-28 09:01:01.262324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:23.661 [2024-09-28 09:01:01.480062] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.661 [2024-09-28 09:01:01.480123] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.661 [2024-09-28 09:01:01.480165] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.661 [2024-09-28 09:01:01.480177] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.661 [2024-09-28 09:01:01.480189] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:23.661 [2024-09-28 09:01:01.480377] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.661 [2024-09-28 09:01:01.480397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.661 [2024-09-28 09:01:01.638135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:24.228 09:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.228 09:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:24.228 09:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:24.228 09:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:24.228 09:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:24.228 09:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.228 09:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=82406 00:23:24.229 09:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:24.487 [2024-09-28 09:01:02.310344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.487 09:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:24.746 Malloc0 00:23:24.746 09:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:25.006 09:01:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.264 09:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:25.523 [2024-09-28 09:01:03.378169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:25.523 09:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:25.782 [2024-09-28 09:01:03.598293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:25.782 09:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=82462 00:23:25.782 09:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:25.782 09:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:25.782 09:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 82462 /var/tmp/bdevperf.sock 00:23:25.782 09:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 82462 ']' 00:23:25.782 09:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.782 09:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:25.782 09:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
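Collected in one place, the target-side provisioning the test just performed is, in sketch form (values copied from the log; the harness issues the same calls through its rpc_py variable):

# One malloc-backed namespace exported through two TCP listeners (4420 and
# 4421) on 10.0.0.3, with ANA reporting (-r) so the host can track per-path state.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"    # default socket /var/tmp/spdk.sock

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421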
00:23:25.782 09:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.782 09:01:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:26.718 09:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.718 09:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:23:26.718 09:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:26.977 09:01:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:27.235 Nvme0n1 00:23:27.235 09:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:27.494 Nvme0n1 00:23:27.494 09:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:27.494 09:01:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:30.023 09:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:30.023 09:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:23:30.023 09:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:30.023 09:01:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:31.398 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:31.398 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:31.398 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.398 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:31.398 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.398 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:31.398 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.398 09:01:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:31.657 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:31.657 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:31.657 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.657 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:31.916 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.917 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:31.917 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.917 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:32.176 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.176 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:32.176 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.176 09:01:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:32.435 09:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.435 09:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:32.435 09:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.435 09:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:32.694 09:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.694 09:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:32.694 09:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:32.953 09:01:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:33.212 09:01:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:34.150 09:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:34.150 09:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:34.150 09:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.150 09:01:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:34.420 09:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:34.420 09:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:34.420 09:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.420 09:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:34.708 09:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.708 09:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:34.708 09:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:34.708 09:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.970 09:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.970 09:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:34.970 09:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.970 09:01:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:35.229 09:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.229 09:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:35.229 09:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.229 09:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:35.489 09:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.489 09:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:35.489 09:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.489 09:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:35.748 09:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.748 09:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:35.748 09:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:36.007 09:01:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:23:36.266 09:01:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:37.203 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:37.203 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:37.203 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.203 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:37.461 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.461 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:37.461 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.461 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:37.720 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:37.721 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:37.721 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:37.721 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.979 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.979 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:23:37.980 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.980 09:01:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:38.238 09:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.238 09:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:38.238 09:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:38.238 09:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.500 09:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.500 09:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:38.500 09:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.500 09:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:38.760 09:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.760 09:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:38.760 09:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:39.018 09:01:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:39.276 09:01:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:40.212 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:40.212 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:40.212 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.212 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:40.471 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.471 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:40.471 09:01:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.471 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:40.730 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:40.730 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:40.730 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.730 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:40.988 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.988 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:40.989 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.989 09:01:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:41.247 09:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.247 09:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:41.247 09:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.247 09:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:41.506 09:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.506 09:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:41.507 09:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.507 09:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:41.765 09:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:41.765 09:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:41.765 09:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:42.024 09:01:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:42.282 09:01:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:43.658 09:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:43.658 09:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:43.658 09:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.658 09:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:43.658 09:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:43.658 09:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:43.658 09:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:43.658 09:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.917 09:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:43.917 09:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:43.917 09:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:43.917 09:01:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:44.175 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.175 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:44.175 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.175 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:44.434 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.434 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:44.434 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.434 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:23:44.692 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:44.692 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:44.692 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:44.692 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.950 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:44.950 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:44.950 09:01:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:45.209 09:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:45.468 09:01:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:46.404 09:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:46.404 09:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:46.404 09:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.404 09:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:46.663 09:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:46.663 09:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:46.663 09:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:46.663 09:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:46.922 09:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:46.922 09:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:46.922 09:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:46.922 09:01:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
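[annotation] The repeated "port_status <port> <attr> <expected>" checks traced above all follow the same pattern: query bdevperf over its RPC socket with bdev_nvme_get_io_paths, extract one attribute (current/connected/accessible) for the listener on the given TCP service ID with jq, and compare it against the expected value. A minimal sketch of such a helper is shown below; the RPC invocation and jq filter are copied from the trace, but the exact body of port_status in host/multipath_status.sh is an assumption, not the verbatim source.

    # Hypothetical reconstruction of the port_status helper exercised above.
    # The rpc.py socket path and jq filter are taken from the trace; the real
    # function in test/nvmf/host/multipath_status.sh may be written differently.
    port_status() {
        local port=$1 attr=$2 expected=$3
        # Ask bdevperf for its current view of the NVMe I/O paths and pull the
        # requested attribute for the path whose listener uses this port.
        local actual
        actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        # Succeed only if the observed attribute matches the expectation.
        [[ "$actual" == "$expected" ]]
    }

    # Usage mirroring the trace: expect the 4421 listener to be the current path.
    port_status 4421 current true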
00:23:47.181 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.181 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:47.181 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:47.181 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.440 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.440 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:47.440 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:47.440 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.699 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.699 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:47.699 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.699 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:47.958 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.958 09:01:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:48.217 09:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:48.217 09:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:23:48.476 09:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:48.735 09:01:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:49.671 09:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:49.671 09:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:49.671 09:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
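[annotation] Each "set_ANA_state <state_4420> <state_4421>" step in the trace issues one nvmf_subsystem_listener_set_ana_state RPC per listener and is followed by a one-second sleep before the host-side paths are re-checked. A sketch under that assumption is below; the NQN, target address, and ports are taken directly from the RPC calls above, while the helper body itself is a reconstruction rather than the verbatim script.

    # Hypothetical sketch of the set_ANA_state helper seen throughout the trace.
    # Subsystem NQN, target IP, and ports are copied from the logged RPC calls;
    # the actual helper in host/multipath_status.sh may differ.
    set_ANA_state() {
        local state_4420=$1 state_4421=$2
        # Change the ANA state advertised by each listener; the host then
        # re-evaluates its multipath view, which check_status verifies.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$state_4420"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$state_4421"
    }

    # Usage mirroring the trace: make 4420 inaccessible and 4421 optimized.
    set_ANA_state inaccessible optimized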
00:23:49.671 09:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:49.930 09:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.930 09:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:49.930 09:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:49.930 09:01:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:50.188 09:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.188 09:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:50.188 09:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.188 09:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:50.755 09:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.755 09:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:50.755 09:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.755 09:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:51.014 09:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.014 09:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:51.014 09:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:51.014 09:01:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.274 09:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.274 09:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:51.274 09:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:51.274 09:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.532 09:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.532 
09:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:51.532 09:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:51.790 09:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:51.790 09:01:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:53.164 09:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:53.164 09:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:53.164 09:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.164 09:01:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:53.164 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:53.164 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:53.164 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:53.164 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.422 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.422 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:53.422 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:53.422 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.693 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.693 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:53.693 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.693 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:53.982 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:53.982 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:53.982 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.982 09:01:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:54.240 09:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.240 09:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:54.240 09:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.240 09:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:54.498 09:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.498 09:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:54.498 09:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:54.756 09:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:23:55.014 09:01:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:55.948 09:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:55.948 09:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:55.948 09:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.948 09:01:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:56.516 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.516 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:56.516 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.516 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:56.516 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.516 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:23:56.516 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.516 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:56.776 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:56.776 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:56.776 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:56.776 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:57.035 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.035 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:57.035 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.035 09:01:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:57.293 09:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.293 09:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:57.293 09:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:57.293 09:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:57.861 09:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:57.861 09:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:57.861 09:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:57.861 09:01:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:58.119 09:01:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:59.055 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:59.056 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:59.056 09:01:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.056 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:59.622 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.622 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:59.622 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.622 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:59.622 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:59.622 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:59.622 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.622 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:59.880 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:59.880 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:59.880 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:59.880 09:01:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:00.139 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.139 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:00.139 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.139 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:00.397 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:00.397 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:00.397 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:00.397 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:24:00.655 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:00.655 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 82462 00:24:00.655 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 82462 ']' 00:24:00.655 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 82462 00:24:00.655 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:00.655 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:00.655 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82462 00:24:00.655 killing process with pid 82462 00:24:00.655 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:00.655 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:00.655 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82462' 00:24:00.655 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 82462 00:24:00.655 09:01:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 82462 00:24:00.914 { 00:24:00.914 "results": [ 00:24:00.914 { 00:24:00.914 "job": "Nvme0n1", 00:24:00.914 "core_mask": "0x4", 00:24:00.914 "workload": "verify", 00:24:00.914 "status": "terminated", 00:24:00.914 "verify_range": { 00:24:00.914 "start": 0, 00:24:00.914 "length": 16384 00:24:00.914 }, 00:24:00.914 "queue_depth": 128, 00:24:00.914 "io_size": 4096, 00:24:00.914 "runtime": 33.073254, 00:24:00.914 "iops": 8145.070938589834, 00:24:00.914 "mibps": 31.81668335386654, 00:24:00.914 "io_failed": 0, 00:24:00.914 "io_timeout": 0, 00:24:00.914 "avg_latency_us": 15684.605471121993, 00:24:00.914 "min_latency_us": 266.24, 00:24:00.914 "max_latency_us": 4026531.84 00:24:00.914 } 00:24:00.914 ], 00:24:00.914 "core_count": 1 00:24:00.915 } 00:24:01.854 09:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 82462 00:24:01.854 09:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:01.854 [2024-09-28 09:01:03.698766] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
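[annotation] As a consistency check on the bdevperf JSON summary above (a simple calculation on the reported numbers, not additional measured data): with the configured io_size of 4096 bytes, 8145.070938589834 IOPS x 4096 B = 33,362,210 B/s, and 33,362,210 / 2^20 ≈ 31.8167 MiB/s, which matches the reported "mibps" value of 31.81668335386654.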
00:24:01.854 [2024-09-28 09:01:03.698920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82462 ] 00:24:01.854 [2024-09-28 09:01:03.857422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.854 [2024-09-28 09:01:04.072319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.854 [2024-09-28 09:01:04.239330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:01.854 [2024-09-28 09:01:05.440694] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:24:01.854 Running I/O for 90 seconds... 00:24:01.854 7667.00 IOPS, 29.95 MiB/s 8289.50 IOPS, 32.38 MiB/s 8458.33 IOPS, 33.04 MiB/s 8502.50 IOPS, 33.21 MiB/s 8493.20 IOPS, 33.18 MiB/s 8524.17 IOPS, 33.30 MiB/s 8531.57 IOPS, 33.33 MiB/s 8527.12 IOPS, 33.31 MiB/s 8547.11 IOPS, 33.39 MiB/s 8563.60 IOPS, 33.45 MiB/s 8564.00 IOPS, 33.45 MiB/s 8584.33 IOPS, 33.53 MiB/s 8593.54 IOPS, 33.57 MiB/s 8591.71 IOPS, 33.56 MiB/s [2024-09-28 09:01:19.973175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.854 [2024-09-28 09:01:19.973265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.973356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.854 [2024-09-28 09:01:19.973387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.973417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.854 [2024-09-28 09:01:19.973437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.973462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.854 [2024-09-28 09:01:19.973481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.973506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.854 [2024-09-28 09:01:19.973524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.973549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.854 [2024-09-28 09:01:19.973568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.973594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.973612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.973637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.973655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.973680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.973719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.973748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.973766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.973791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.973809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.973852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.973872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.973913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.973932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.973959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.973978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.974004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.974023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.974049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.974067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.974093] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.974112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.974138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.974156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.974182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.974201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.974226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.974245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.974271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.974301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.974330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.854 [2024-09-28 09:01:19.974349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.974375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.854 [2024-09-28 09:01:19.974395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.974435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.854 [2024-09-28 09:01:19.974455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.974487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.854 [2024-09-28 09:01:19.974508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.974533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.854 [2024-09-28 09:01:19.974554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 
dnr:0 00:24:01.854 [2024-09-28 09:01:19.974579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.854 [2024-09-28 09:01:19.974598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:01.854 [2024-09-28 09:01:19.974623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.854 [2024-09-28 09:01:19.974641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.974666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.974685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.974711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.974748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.974774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.974794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.974820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.974853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.974897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.974927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.974956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.974976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.975021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.975065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.975112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.855 [2024-09-28 09:01:19.975157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.855 [2024-09-28 09:01:19.975201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.855 [2024-09-28 09:01:19.975247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.855 [2024-09-28 09:01:19.975292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.855 [2024-09-28 09:01:19.975351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.855 [2024-09-28 09:01:19.975396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.855 [2024-09-28 09:01:19.975439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.855 [2024-09-28 09:01:19.975482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.975536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.975580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.975624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.975668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:73192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.975711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.975755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.975798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.975859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:73224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.975907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:73232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.975967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.975994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:01.855 [2024-09-28 09:01:19.976014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.976064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.976088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.976127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.976148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.976174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.976193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.976219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.976238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.976263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.976282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.976308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.976328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.976368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.976387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.976412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.855 [2024-09-28 09:01:19.976431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.976456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.855 [2024-09-28 09:01:19.976474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.976500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 
lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.855 [2024-09-28 09:01:19.976519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:01.855 [2024-09-28 09:01:19.976545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.855 [2024-09-28 09:01:19.976564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.976590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.976608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.976634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.976653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.976678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.976704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.976731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.976750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.976776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.976834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.976888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.976910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.976937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.976957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.976984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:73328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:73352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:73360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:73408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:01.856 
[2024-09-28 09:01:19.977518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.856 [2024-09-28 09:01:19.977626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.977669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.977713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.977757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.977800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.977855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.977935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.977990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.978011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.978038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.978059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.978085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.978105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.978132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.978151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.978178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.978198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.978224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.978244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.978271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.978290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.978331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.978358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.978386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.978405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.978431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.856 [2024-09-28 09:01:19.978450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:01.856 [2024-09-28 09:01:19.978476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.978495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.978521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.978540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.978574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.978595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.979551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.979586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.979628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.979650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.979683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.979704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.979736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.979756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.979788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.979808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.979854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:73440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:19.979893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.979928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:19.979948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.979982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:73456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:01.857 [2024-09-28 09:01:19.980002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.980036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:19.980056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.980089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:19.980109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.980142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:19.980165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.980200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:73488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:19.980233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.980288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:19.980328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.980363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.980384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.980416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.980436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.980468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.980488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.980520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.980539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.980572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.980591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.980624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.980644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.980676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.980695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:19.980727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:19.980747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:01.857 8249.87 IOPS, 32.23 MiB/s 7734.25 IOPS, 30.21 MiB/s 7279.29 IOPS, 28.43 MiB/s 6874.89 IOPS, 26.86 MiB/s 6784.47 IOPS, 26.50 MiB/s 6870.05 IOPS, 26.84 MiB/s 6983.76 IOPS, 27.28 MiB/s 7212.41 IOPS, 28.17 MiB/s 7412.09 IOPS, 28.95 MiB/s 7582.29 IOPS, 29.62 MiB/s 7621.72 IOPS, 29.77 MiB/s 7652.27 IOPS, 29.89 MiB/s 7676.41 IOPS, 29.99 MiB/s 7791.64 IOPS, 30.44 MiB/s 7933.17 IOPS, 30.99 MiB/s 8056.73 IOPS, 31.47 MiB/s [2024-09-28 09:01:36.012542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:36.012633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:36.012724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:36.012752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:36.012781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:36.012842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:36.012873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:36.012894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:36.012921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:36.012939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 
09:01:36.012966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:36.012986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:36.013012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:36.013033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:36.013059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.857 [2024-09-28 09:01:36.013078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:36.013105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:36.013123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:36.013150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:36.013182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:36.013222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:36.013239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:36.013264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:36.013282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:36.013307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:36.013324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:36.013369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.857 [2024-09-28 09:01:36.013401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:01.857 [2024-09-28 09:01:36.013428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.013447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 
sqhd:0039 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.013472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.013490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.013516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.013534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.013561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.013580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.013605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.013624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.013649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.013667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.013692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.013710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.013735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.013753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.013778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.013796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.013821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.013856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.013885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.013905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.013930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.013957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.013984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.014002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.014027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:9176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.014045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.014070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.014088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.014113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.014133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.014158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.014177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.014202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.014221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.014268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.014292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.014319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.014339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.014364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.014382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.014407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.014425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.014451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.014469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.014495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.014514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.015600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.015636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.015681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.015708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.015735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.015755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.015781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.015800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.015845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.015865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.015891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.015909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.015935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:01.858 [2024-09-28 09:01:36.015953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.015978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.015997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.016023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:01.858 [2024-09-28 09:01:36.016041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:01.858 [2024-09-28 09:01:36.016067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.858 [2024-09-28 09:01:36.016086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:01.859 [2024-09-28 09:01:36.016112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.859 [2024-09-28 09:01:36.016130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:01.859 [2024-09-28 09:01:36.016155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.859 [2024-09-28 09:01:36.016174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:01.859 [2024-09-28 09:01:36.016213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:01.859 [2024-09-28 09:01:36.016234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:01.859 8120.26 IOPS, 31.72 MiB/s 8135.25 IOPS, 31.78 MiB/s 8146.91 IOPS, 31.82 MiB/s Received shutdown signal, test time was about 33.074213 seconds 00:24:01.859 00:24:01.859 Latency(us) 00:24:01.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.859 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:01.859 Verification LBA range: start 0x0 length 0x4000 00:24:01.859 Nvme0n1 : 33.07 8145.07 31.82 0.00 0.00 15684.61 266.24 4026531.84 00:24:01.859 =================================================================================================================== 00:24:01.859 Total : 8145.07 31.82 0.00 0.00 15684.61 266.24 4026531.84 00:24:01.859 [2024-09-28 09:01:38.651907] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:24:01.859 09:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:02.117 09:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:02.117 09:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:02.117 09:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:02.117 09:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:02.117 09:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:02.117 09:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:02.117 09:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:02.118 09:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:02.118 09:01:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:02.118 rmmod nvme_tcp 00:24:02.118 rmmod nvme_fabrics 00:24:02.118 rmmod nvme_keyring 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 82406 ']' 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 82406 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 82406 ']' 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 82406 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82406 00:24:02.118 killing process with pid 82406 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82406' 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 82406 00:24:02.118 09:01:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 82406 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:24:03.495 09:01:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:03.495 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:24:03.496 00:24:03.496 real 0m40.972s 00:24:03.496 user 2m10.134s 00:24:03.496 sys 0m10.970s 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:03.496 ************************************ 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:03.496 END TEST nvmf_host_multipath_status 00:24:03.496 ************************************ 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:03.496 
09:01:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.496 ************************************ 00:24:03.496 START TEST nvmf_discovery_remove_ifc 00:24:03.496 ************************************ 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:03.496 * Looking for test storage... 00:24:03.496 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:03.496 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:03.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.756 --rc genhtml_branch_coverage=1 00:24:03.756 --rc genhtml_function_coverage=1 00:24:03.756 --rc genhtml_legend=1 00:24:03.756 --rc geninfo_all_blocks=1 00:24:03.756 --rc geninfo_unexecuted_blocks=1 00:24:03.756 00:24:03.756 ' 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:03.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.756 --rc genhtml_branch_coverage=1 00:24:03.756 --rc genhtml_function_coverage=1 00:24:03.756 --rc genhtml_legend=1 00:24:03.756 --rc geninfo_all_blocks=1 00:24:03.756 --rc geninfo_unexecuted_blocks=1 00:24:03.756 00:24:03.756 ' 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:03.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.756 --rc genhtml_branch_coverage=1 00:24:03.756 --rc genhtml_function_coverage=1 00:24:03.756 --rc genhtml_legend=1 00:24:03.756 --rc geninfo_all_blocks=1 00:24:03.756 --rc geninfo_unexecuted_blocks=1 00:24:03.756 00:24:03.756 ' 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:03.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.756 --rc genhtml_branch_coverage=1 00:24:03.756 --rc genhtml_function_coverage=1 00:24:03.756 --rc genhtml_legend=1 00:24:03.756 --rc geninfo_all_blocks=1 00:24:03.756 --rc geninfo_unexecuted_blocks=1 00:24:03.756 00:24:03.756 ' 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:03.756 09:01:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.756 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:03.757 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:03.757 09:01:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:03.757 Cannot find device "nvmf_init_br" 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:03.757 Cannot find device "nvmf_init_br2" 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:03.757 Cannot find device "nvmf_tgt_br" 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:03.757 Cannot find device "nvmf_tgt_br2" 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:03.757 Cannot find device "nvmf_init_br" 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:03.757 Cannot find device "nvmf_init_br2" 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:03.757 Cannot find device "nvmf_tgt_br" 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:03.757 Cannot find device "nvmf_tgt_br2" 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:03.757 Cannot find device "nvmf_br" 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:03.757 Cannot find device "nvmf_init_if" 00:24:03.757 09:01:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:03.757 Cannot find device "nvmf_init_if2" 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:03.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:03.757 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:03.757 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:04.016 09:01:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:04.016 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:04.017 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:04.017 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:24:04.017 00:24:04.017 --- 10.0.0.3 ping statistics --- 00:24:04.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.017 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:04.017 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:04.017 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:24:04.017 00:24:04.017 --- 10.0.0.4 ping statistics --- 00:24:04.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.017 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:04.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:04.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:24:04.017 00:24:04.017 --- 10.0.0.1 ping statistics --- 00:24:04.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.017 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:04.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:04.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:24:04.017 00:24:04.017 --- 10.0.0.2 ping statistics --- 00:24:04.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.017 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # return 0 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:04.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=83305 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 83305 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 83305 ']' 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:04.017 09:01:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:04.276 [2024-09-28 09:01:42.101655] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:24:04.276 [2024-09-28 09:01:42.102097] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.534 [2024-09-28 09:01:42.278967] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.534 [2024-09-28 09:01:42.507568] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.535 [2024-09-28 09:01:42.507649] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.535 [2024-09-28 09:01:42.507676] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.535 [2024-09-28 09:01:42.507698] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.535 [2024-09-28 09:01:42.507715] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:04.535 [2024-09-28 09:01:42.507765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.793 [2024-09-28 09:01:42.660184] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:05.360 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:05.360 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:05.361 [2024-09-28 09:01:43.108465] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.361 [2024-09-28 09:01:43.116639] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:24:05.361 null0 00:24:05.361 [2024-09-28 09:01:43.148569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=83337 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83337 /tmp/host.sock 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 83337 ']' 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:05.361 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:05.361 09:01:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:05.361 [2024-09-28 09:01:43.290824] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:24:05.361 [2024-09-28 09:01:43.291008] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83337 ] 00:24:05.620 [2024-09-28 09:01:43.466320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.879 [2024-09-28 09:01:43.683360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.137 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:06.137 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:06.137 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:06.137 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:06.137 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.137 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:06.396 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.396 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:06.396 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.397 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:06.397 [2024-09-28 09:01:44.282352] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:06.397 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.397 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:06.397 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.397 09:01:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.777 [2024-09-28 09:01:45.382699] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:07.777 [2024-09-28 09:01:45.382736] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:07.777 [2024-09-28 09:01:45.382769] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:07.777 [2024-09-28 09:01:45.388761] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:24:07.777 [2024-09-28 09:01:45.446359] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:07.777 [2024-09-28 09:01:45.446422] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:07.777 [2024-09-28 09:01:45.446482] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:07.777 [2024-09-28 09:01:45.446509] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:24:07.777 [2024-09-28 09:01:45.446542] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:07.777 [2024-09-28 09:01:45.451841] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002b500 was disconnected an 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:07.777 d freed. delete nvme_qpair. 
00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:07.777 09:01:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:08.713 09:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:08.713 09:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:08.713 09:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.713 09:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:08.713 09:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.713 09:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:08.713 09:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.713 09:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.713 09:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:08.713 09:01:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:09.649 09:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:09.649 09:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.649 09:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:09.649 09:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.649 09:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:09.649 09:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:09.649 09:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.906 09:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.907 09:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:09.907 09:01:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:10.842 09:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:10.842 09:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.842 09:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:10.842 09:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.842 09:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:10.843 09:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.843 09:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:10.843 09:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.843 09:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:10.843 09:01:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:11.780 09:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.780 09:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.780 09:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.780 09:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.780 09:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.780 09:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.780 09:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.780 09:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:24:12.040 09:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:12.040 09:01:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:12.976 09:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:12.976 09:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.976 09:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.976 09:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.976 09:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:12.976 09:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:12.976 09:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:12.976 09:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.976 09:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:12.976 09:01:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:12.976 [2024-09-28 09:01:50.874309] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:12.976 [2024-09-28 09:01:50.874399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.976 [2024-09-28 09:01:50.874420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.976 [2024-09-28 09:01:50.874442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.976 [2024-09-28 09:01:50.874454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.976 [2024-09-28 09:01:50.874465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.976 [2024-09-28 09:01:50.874477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.976 [2024-09-28 09:01:50.874488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.976 [2024-09-28 09:01:50.874499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.976 [2024-09-28 09:01:50.874510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.976 [2024-09-28 09:01:50.874522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.976 [2024-09-28 09:01:50.874532] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 
00:24:12.976 [2024-09-28 09:01:50.884301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:24:12.976 [2024-09-28 09:01:50.894324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:13.914 09:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.914 09:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.914 09:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.914 09:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.914 09:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.914 09:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.915 09:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:14.176 [2024-09-28 09:01:51.955969] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:24:14.176 [2024-09-28 09:01:51.956413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:24:14.176 [2024-09-28 09:01:51.956941] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:24:14.176 [2024-09-28 09:01:51.957303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:24:14.176 [2024-09-28 09:01:51.958716] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:14.176 [2024-09-28 09:01:51.958891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:14.176 [2024-09-28 09:01:51.958946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:14.176 [2024-09-28 09:01:51.959018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:14.176 [2024-09-28 09:01:51.959172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:14.176 [2024-09-28 09:01:51.959215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:14.176 09:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.176 09:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:14.176 09:01:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:15.188 [2024-09-28 09:01:52.959295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:24:15.188 [2024-09-28 09:01:52.959346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:15.189 [2024-09-28 09:01:52.959376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:15.189 [2024-09-28 09:01:52.959388] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:15.189 [2024-09-28 09:01:52.959419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:15.189 [2024-09-28 09:01:52.959465] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:24:15.189 [2024-09-28 09:01:52.959515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.189 [2024-09-28 09:01:52.959535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.189 [2024-09-28 09:01:52.959552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.189 [2024-09-28 09:01:52.959564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.189 [2024-09-28 09:01:52.959575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.189 [2024-09-28 09:01:52.959586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.189 [2024-09-28 09:01:52.959598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.189 [2024-09-28 09:01:52.959608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.189 [2024-09-28 09:01:52.959620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.189 [2024-09-28 09:01:52.959631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:15.189 [2024-09-28 09:01:52.959641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:24:15.189 [2024-09-28 09:01:52.960265] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:24:15.189 [2024-09-28 09:01:52.961296] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:15.189 [2024-09-28 09:01:52.961477] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:15.189 09:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.189 09:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.189 09:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.189 09:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.189 09:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.189 09:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.189 09:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.189 09:01:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:15.189 09:01:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:16.127 09:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:16.127 09:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:16.127 09:01:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:16.127 09:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.127 09:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:16.127 09:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.127 09:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:16.386 09:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.386 09:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:16.386 09:01:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:17.324 [2024-09-28 09:01:54.968075] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:17.324 [2024-09-28 09:01:54.968117] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:17.324 [2024-09-28 09:01:54.968145] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:17.324 [2024-09-28 09:01:54.974139] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:24:17.324 [2024-09-28 09:01:55.039562] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:17.324 [2024-09-28 09:01:55.039639] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:17.324 [2024-09-28 09:01:55.039702] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:17.324 [2024-09-28 09:01:55.039759] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:24:17.324 [2024-09-28 09:01:55.039790] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:24:17.324 [2024-09-28 09:01:55.046780] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61500002c180 was disconnected and freed. delete nvme_qpair. 
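The xtrace above repeats the test's get_bdev_list/wait_for_bdev polling pattern: query the host app's RPC socket for attached bdevs, normalize the list, and sleep 1 between attempts until the expected name (nvme1n1) reappears once the target interface is brought back up. A minimal standalone sketch of that pattern follows; it assumes SPDK's scripts/rpc.py is invokable as rpc.py and that the host app listens on /tmp/host.sock as in this trace, and it is an illustration of the pattern, not the helper from discovery_remove_ifc.sh itself.

    # Sketch only: mirrors the traced rpc_cmd | jq | sort | xargs pipeline.
    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the expected bdev name shows up in the list.
    wait_for_bdev() {
        local bdev=$1
        while [[ "$(get_bdev_list)" != *"$bdev"* ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme1n1   # e.g. after re-adding 10.0.0.3 and setting nvmf_tgt_if up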
00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 83337 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 83337 ']' 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 83337 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83337 00:24:17.324 killing process with pid 83337 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83337' 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 83337 00:24:17.324 09:01:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 83337 00:24:18.260 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:18.260 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:18.260 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:24:18.260 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:18.260 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:24:18.260 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:18.260 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:18.260 rmmod nvme_tcp 00:24:18.519 rmmod nvme_fabrics 00:24:18.519 rmmod nvme_keyring 00:24:18.519 09:01:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 83305 ']' 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 83305 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 83305 ']' 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 83305 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83305 00:24:18.519 killing process with pid 83305 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83305' 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 83305 00:24:18.519 09:01:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 83305 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.455 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.714 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:24:19.714 00:24:19.714 real 0m16.121s 00:24:19.714 user 0m26.992s 00:24:19.714 sys 0m2.602s 00:24:19.714 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:19.714 09:01:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:19.714 ************************************ 00:24:19.714 END TEST nvmf_discovery_remove_ifc 00:24:19.714 ************************************ 00:24:19.714 09:01:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:19.714 09:01:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:19.714 09:01:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:19.714 09:01:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.714 ************************************ 00:24:19.714 START TEST nvmf_identify_kernel_target 00:24:19.714 ************************************ 00:24:19.714 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:19.714 * Looking for test storage... 
00:24:19.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:19.714 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:19.714 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:24:19.714 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:19.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.974 --rc genhtml_branch_coverage=1 00:24:19.974 --rc genhtml_function_coverage=1 00:24:19.974 --rc genhtml_legend=1 00:24:19.974 --rc geninfo_all_blocks=1 00:24:19.974 --rc geninfo_unexecuted_blocks=1 00:24:19.974 00:24:19.974 ' 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:19.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.974 --rc genhtml_branch_coverage=1 00:24:19.974 --rc genhtml_function_coverage=1 00:24:19.974 --rc genhtml_legend=1 00:24:19.974 --rc geninfo_all_blocks=1 00:24:19.974 --rc geninfo_unexecuted_blocks=1 00:24:19.974 00:24:19.974 ' 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:19.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.974 --rc genhtml_branch_coverage=1 00:24:19.974 --rc genhtml_function_coverage=1 00:24:19.974 --rc genhtml_legend=1 00:24:19.974 --rc geninfo_all_blocks=1 00:24:19.974 --rc geninfo_unexecuted_blocks=1 00:24:19.974 00:24:19.974 ' 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:19.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.974 --rc genhtml_branch_coverage=1 00:24:19.974 --rc genhtml_function_coverage=1 00:24:19.974 --rc genhtml_legend=1 00:24:19.974 --rc geninfo_all_blocks=1 00:24:19.974 --rc geninfo_unexecuted_blocks=1 00:24:19.974 00:24:19.974 ' 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
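The scripts/common.sh trace above is the coverage-tooling version check: the installed lcov version string is split on '.', '-' and ':' and compared component by component against 2, and because 1.15 is lower the legacy --rc lcov_branch_coverage/--rc lcov_function_coverage option names are kept. A minimal standalone rendering of that comparison is sketched below; version_lt is an illustrative name (the repo helpers are cmp_versions/lt) and purely numeric components are assumed, as in this run.

    # Sketch only: component-wise "is version A lower than B", as traced above.
    version_lt() {
        local IFS='.-:'          # split on the same separators as cmp_versions
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i x y
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                 # equal, so not lower
    }

    version_lt 1.15 2 && echo "lcov older than 2.x: keep legacy --rc lcov_* flags"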
00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.974 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:19.975 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:19.975 09:01:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:19.975 09:01:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:19.975 Cannot find device "nvmf_init_br" 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:19.975 Cannot find device "nvmf_init_br2" 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:19.975 Cannot find device "nvmf_tgt_br" 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:19.975 Cannot find device "nvmf_tgt_br2" 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:19.975 Cannot find device "nvmf_init_br" 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:19.975 Cannot find device "nvmf_init_br2" 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:19.975 Cannot find device "nvmf_tgt_br" 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:19.975 Cannot find device "nvmf_tgt_br2" 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:19.975 Cannot find device "nvmf_br" 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:19.975 Cannot find device "nvmf_init_if" 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:19.975 Cannot find device "nvmf_init_if2" 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:19.975 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:19.975 09:01:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:24:19.975 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:19.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:19.976 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:24:19.976 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:19.976 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:19.976 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:19.976 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:19.976 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:20.235 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:20.235 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:20.235 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:20.235 09:01:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:20.235 09:01:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:20.235 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:20.235 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:24:20.235 00:24:20.235 --- 10.0.0.3 ping statistics --- 00:24:20.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.235 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:20.235 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:20.235 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:24:20.235 00:24:20.235 --- 10.0.0.4 ping statistics --- 00:24:20.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.235 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:20.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:24:20.235 00:24:20.235 --- 10.0.0.1 ping statistics --- 00:24:20.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.235 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:20.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:20.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:24:20.235 00:24:20.235 --- 10.0.0.2 ping statistics --- 00:24:20.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.235 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # return 0 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:20.235 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:20.236 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:20.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:20.802 Waiting for block devices as requested 00:24:20.802 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:20.802 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:21.062 No valid GPT data, bailing 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:24:21.062 09:01:58 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:21.062 No valid GPT data, bailing 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:21.062 09:01:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:21.062 No valid GPT data, bailing 00:24:21.062 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:21.062 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:21.062 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:21.062 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:24:21.062 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:21.062 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:21.062 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:24:21.062 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:21.062 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:21.062 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:21.062 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:24:21.062 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:21.062 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:21.321 No valid GPT data, bailing 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -a 10.0.0.1 -t tcp -s 4420 00:24:21.321 00:24:21.321 Discovery Log Number of Records 2, Generation counter 2 00:24:21.321 =====Discovery Log Entry 0====== 00:24:21.321 trtype: tcp 00:24:21.321 adrfam: ipv4 00:24:21.321 subtype: current discovery subsystem 00:24:21.321 treq: not specified, sq flow control disable supported 00:24:21.321 portid: 1 00:24:21.321 trsvcid: 4420 00:24:21.321 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:21.321 traddr: 10.0.0.1 00:24:21.321 eflags: none 00:24:21.321 sectype: none 00:24:21.321 =====Discovery Log Entry 1====== 00:24:21.321 trtype: tcp 00:24:21.321 adrfam: ipv4 00:24:21.321 subtype: nvme subsystem 00:24:21.321 treq: not 
specified, sq flow control disable supported 00:24:21.321 portid: 1 00:24:21.321 trsvcid: 4420 00:24:21.321 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:21.321 traddr: 10.0.0.1 00:24:21.321 eflags: none 00:24:21.321 sectype: none 00:24:21.321 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:21.321 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:21.581 ===================================================== 00:24:21.581 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:21.581 ===================================================== 00:24:21.581 Controller Capabilities/Features 00:24:21.581 ================================ 00:24:21.581 Vendor ID: 0000 00:24:21.581 Subsystem Vendor ID: 0000 00:24:21.581 Serial Number: 958ec9d10a94d09bdc88 00:24:21.581 Model Number: Linux 00:24:21.581 Firmware Version: 6.8.9-20 00:24:21.581 Recommended Arb Burst: 0 00:24:21.581 IEEE OUI Identifier: 00 00 00 00:24:21.581 Multi-path I/O 00:24:21.581 May have multiple subsystem ports: No 00:24:21.581 May have multiple controllers: No 00:24:21.581 Associated with SR-IOV VF: No 00:24:21.581 Max Data Transfer Size: Unlimited 00:24:21.581 Max Number of Namespaces: 0 00:24:21.581 Max Number of I/O Queues: 1024 00:24:21.581 NVMe Specification Version (VS): 1.3 00:24:21.581 NVMe Specification Version (Identify): 1.3 00:24:21.581 Maximum Queue Entries: 1024 00:24:21.581 Contiguous Queues Required: No 00:24:21.581 Arbitration Mechanisms Supported 00:24:21.581 Weighted Round Robin: Not Supported 00:24:21.581 Vendor Specific: Not Supported 00:24:21.581 Reset Timeout: 7500 ms 00:24:21.581 Doorbell Stride: 4 bytes 00:24:21.581 NVM Subsystem Reset: Not Supported 00:24:21.581 Command Sets Supported 00:24:21.581 NVM Command Set: Supported 00:24:21.581 Boot Partition: Not Supported 00:24:21.581 Memory Page Size Minimum: 4096 bytes 00:24:21.581 Memory Page Size Maximum: 4096 bytes 00:24:21.581 Persistent Memory Region: Not Supported 00:24:21.581 Optional Asynchronous Events Supported 00:24:21.581 Namespace Attribute Notices: Not Supported 00:24:21.581 Firmware Activation Notices: Not Supported 00:24:21.581 ANA Change Notices: Not Supported 00:24:21.581 PLE Aggregate Log Change Notices: Not Supported 00:24:21.581 LBA Status Info Alert Notices: Not Supported 00:24:21.581 EGE Aggregate Log Change Notices: Not Supported 00:24:21.581 Normal NVM Subsystem Shutdown event: Not Supported 00:24:21.581 Zone Descriptor Change Notices: Not Supported 00:24:21.581 Discovery Log Change Notices: Supported 00:24:21.581 Controller Attributes 00:24:21.581 128-bit Host Identifier: Not Supported 00:24:21.581 Non-Operational Permissive Mode: Not Supported 00:24:21.581 NVM Sets: Not Supported 00:24:21.581 Read Recovery Levels: Not Supported 00:24:21.581 Endurance Groups: Not Supported 00:24:21.581 Predictable Latency Mode: Not Supported 00:24:21.581 Traffic Based Keep ALive: Not Supported 00:24:21.581 Namespace Granularity: Not Supported 00:24:21.581 SQ Associations: Not Supported 00:24:21.581 UUID List: Not Supported 00:24:21.581 Multi-Domain Subsystem: Not Supported 00:24:21.581 Fixed Capacity Management: Not Supported 00:24:21.581 Variable Capacity Management: Not Supported 00:24:21.581 Delete Endurance Group: Not Supported 00:24:21.581 Delete NVM Set: Not Supported 00:24:21.581 Extended LBA Formats Supported: Not Supported 00:24:21.581 Flexible Data 
Placement Supported: Not Supported 00:24:21.581 00:24:21.581 Controller Memory Buffer Support 00:24:21.581 ================================ 00:24:21.581 Supported: No 00:24:21.581 00:24:21.581 Persistent Memory Region Support 00:24:21.581 ================================ 00:24:21.581 Supported: No 00:24:21.581 00:24:21.581 Admin Command Set Attributes 00:24:21.581 ============================ 00:24:21.581 Security Send/Receive: Not Supported 00:24:21.581 Format NVM: Not Supported 00:24:21.581 Firmware Activate/Download: Not Supported 00:24:21.581 Namespace Management: Not Supported 00:24:21.581 Device Self-Test: Not Supported 00:24:21.581 Directives: Not Supported 00:24:21.581 NVMe-MI: Not Supported 00:24:21.581 Virtualization Management: Not Supported 00:24:21.581 Doorbell Buffer Config: Not Supported 00:24:21.581 Get LBA Status Capability: Not Supported 00:24:21.581 Command & Feature Lockdown Capability: Not Supported 00:24:21.581 Abort Command Limit: 1 00:24:21.581 Async Event Request Limit: 1 00:24:21.581 Number of Firmware Slots: N/A 00:24:21.581 Firmware Slot 1 Read-Only: N/A 00:24:21.581 Firmware Activation Without Reset: N/A 00:24:21.581 Multiple Update Detection Support: N/A 00:24:21.581 Firmware Update Granularity: No Information Provided 00:24:21.581 Per-Namespace SMART Log: No 00:24:21.581 Asymmetric Namespace Access Log Page: Not Supported 00:24:21.581 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:21.581 Command Effects Log Page: Not Supported 00:24:21.581 Get Log Page Extended Data: Supported 00:24:21.581 Telemetry Log Pages: Not Supported 00:24:21.581 Persistent Event Log Pages: Not Supported 00:24:21.581 Supported Log Pages Log Page: May Support 00:24:21.581 Commands Supported & Effects Log Page: Not Supported 00:24:21.581 Feature Identifiers & Effects Log Page:May Support 00:24:21.581 NVMe-MI Commands & Effects Log Page: May Support 00:24:21.581 Data Area 4 for Telemetry Log: Not Supported 00:24:21.581 Error Log Page Entries Supported: 1 00:24:21.581 Keep Alive: Not Supported 00:24:21.581 00:24:21.581 NVM Command Set Attributes 00:24:21.581 ========================== 00:24:21.581 Submission Queue Entry Size 00:24:21.581 Max: 1 00:24:21.581 Min: 1 00:24:21.581 Completion Queue Entry Size 00:24:21.581 Max: 1 00:24:21.581 Min: 1 00:24:21.581 Number of Namespaces: 0 00:24:21.581 Compare Command: Not Supported 00:24:21.581 Write Uncorrectable Command: Not Supported 00:24:21.581 Dataset Management Command: Not Supported 00:24:21.581 Write Zeroes Command: Not Supported 00:24:21.581 Set Features Save Field: Not Supported 00:24:21.581 Reservations: Not Supported 00:24:21.581 Timestamp: Not Supported 00:24:21.581 Copy: Not Supported 00:24:21.581 Volatile Write Cache: Not Present 00:24:21.581 Atomic Write Unit (Normal): 1 00:24:21.581 Atomic Write Unit (PFail): 1 00:24:21.581 Atomic Compare & Write Unit: 1 00:24:21.581 Fused Compare & Write: Not Supported 00:24:21.581 Scatter-Gather List 00:24:21.581 SGL Command Set: Supported 00:24:21.581 SGL Keyed: Not Supported 00:24:21.581 SGL Bit Bucket Descriptor: Not Supported 00:24:21.581 SGL Metadata Pointer: Not Supported 00:24:21.581 Oversized SGL: Not Supported 00:24:21.581 SGL Metadata Address: Not Supported 00:24:21.581 SGL Offset: Supported 00:24:21.581 Transport SGL Data Block: Not Supported 00:24:21.581 Replay Protected Memory Block: Not Supported 00:24:21.581 00:24:21.581 Firmware Slot Information 00:24:21.581 ========================= 00:24:21.581 Active slot: 0 00:24:21.581 00:24:21.581 00:24:21.581 Error Log 
00:24:21.581 ========= 00:24:21.581 00:24:21.581 Active Namespaces 00:24:21.581 ================= 00:24:21.581 Discovery Log Page 00:24:21.581 ================== 00:24:21.581 Generation Counter: 2 00:24:21.581 Number of Records: 2 00:24:21.581 Record Format: 0 00:24:21.581 00:24:21.581 Discovery Log Entry 0 00:24:21.581 ---------------------- 00:24:21.581 Transport Type: 3 (TCP) 00:24:21.581 Address Family: 1 (IPv4) 00:24:21.581 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:21.581 Entry Flags: 00:24:21.581 Duplicate Returned Information: 0 00:24:21.581 Explicit Persistent Connection Support for Discovery: 0 00:24:21.581 Transport Requirements: 00:24:21.581 Secure Channel: Not Specified 00:24:21.581 Port ID: 1 (0x0001) 00:24:21.581 Controller ID: 65535 (0xffff) 00:24:21.581 Admin Max SQ Size: 32 00:24:21.581 Transport Service Identifier: 4420 00:24:21.581 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:21.581 Transport Address: 10.0.0.1 00:24:21.581 Discovery Log Entry 1 00:24:21.581 ---------------------- 00:24:21.581 Transport Type: 3 (TCP) 00:24:21.581 Address Family: 1 (IPv4) 00:24:21.581 Subsystem Type: 2 (NVM Subsystem) 00:24:21.581 Entry Flags: 00:24:21.581 Duplicate Returned Information: 0 00:24:21.581 Explicit Persistent Connection Support for Discovery: 0 00:24:21.581 Transport Requirements: 00:24:21.582 Secure Channel: Not Specified 00:24:21.582 Port ID: 1 (0x0001) 00:24:21.582 Controller ID: 65535 (0xffff) 00:24:21.582 Admin Max SQ Size: 32 00:24:21.582 Transport Service Identifier: 4420 00:24:21.582 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:21.582 Transport Address: 10.0.0.1 00:24:21.582 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:21.841 get_feature(0x01) failed 00:24:21.841 get_feature(0x02) failed 00:24:21.841 get_feature(0x04) failed 00:24:21.841 ===================================================== 00:24:21.841 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:21.841 ===================================================== 00:24:21.841 Controller Capabilities/Features 00:24:21.841 ================================ 00:24:21.841 Vendor ID: 0000 00:24:21.841 Subsystem Vendor ID: 0000 00:24:21.841 Serial Number: 696875859f79f1ab5f67 00:24:21.841 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:21.841 Firmware Version: 6.8.9-20 00:24:21.841 Recommended Arb Burst: 6 00:24:21.841 IEEE OUI Identifier: 00 00 00 00:24:21.841 Multi-path I/O 00:24:21.841 May have multiple subsystem ports: Yes 00:24:21.841 May have multiple controllers: Yes 00:24:21.841 Associated with SR-IOV VF: No 00:24:21.841 Max Data Transfer Size: Unlimited 00:24:21.841 Max Number of Namespaces: 1024 00:24:21.841 Max Number of I/O Queues: 128 00:24:21.841 NVMe Specification Version (VS): 1.3 00:24:21.841 NVMe Specification Version (Identify): 1.3 00:24:21.841 Maximum Queue Entries: 1024 00:24:21.841 Contiguous Queues Required: No 00:24:21.842 Arbitration Mechanisms Supported 00:24:21.842 Weighted Round Robin: Not Supported 00:24:21.842 Vendor Specific: Not Supported 00:24:21.842 Reset Timeout: 7500 ms 00:24:21.842 Doorbell Stride: 4 bytes 00:24:21.842 NVM Subsystem Reset: Not Supported 00:24:21.842 Command Sets Supported 00:24:21.842 NVM Command Set: Supported 00:24:21.842 Boot Partition: Not Supported 00:24:21.842 Memory 
Page Size Minimum: 4096 bytes 00:24:21.842 Memory Page Size Maximum: 4096 bytes 00:24:21.842 Persistent Memory Region: Not Supported 00:24:21.842 Optional Asynchronous Events Supported 00:24:21.842 Namespace Attribute Notices: Supported 00:24:21.842 Firmware Activation Notices: Not Supported 00:24:21.842 ANA Change Notices: Supported 00:24:21.842 PLE Aggregate Log Change Notices: Not Supported 00:24:21.842 LBA Status Info Alert Notices: Not Supported 00:24:21.842 EGE Aggregate Log Change Notices: Not Supported 00:24:21.842 Normal NVM Subsystem Shutdown event: Not Supported 00:24:21.842 Zone Descriptor Change Notices: Not Supported 00:24:21.842 Discovery Log Change Notices: Not Supported 00:24:21.842 Controller Attributes 00:24:21.842 128-bit Host Identifier: Supported 00:24:21.842 Non-Operational Permissive Mode: Not Supported 00:24:21.842 NVM Sets: Not Supported 00:24:21.842 Read Recovery Levels: Not Supported 00:24:21.842 Endurance Groups: Not Supported 00:24:21.842 Predictable Latency Mode: Not Supported 00:24:21.842 Traffic Based Keep ALive: Supported 00:24:21.842 Namespace Granularity: Not Supported 00:24:21.842 SQ Associations: Not Supported 00:24:21.842 UUID List: Not Supported 00:24:21.842 Multi-Domain Subsystem: Not Supported 00:24:21.842 Fixed Capacity Management: Not Supported 00:24:21.842 Variable Capacity Management: Not Supported 00:24:21.842 Delete Endurance Group: Not Supported 00:24:21.842 Delete NVM Set: Not Supported 00:24:21.842 Extended LBA Formats Supported: Not Supported 00:24:21.842 Flexible Data Placement Supported: Not Supported 00:24:21.842 00:24:21.842 Controller Memory Buffer Support 00:24:21.842 ================================ 00:24:21.842 Supported: No 00:24:21.842 00:24:21.842 Persistent Memory Region Support 00:24:21.842 ================================ 00:24:21.842 Supported: No 00:24:21.842 00:24:21.842 Admin Command Set Attributes 00:24:21.842 ============================ 00:24:21.842 Security Send/Receive: Not Supported 00:24:21.842 Format NVM: Not Supported 00:24:21.842 Firmware Activate/Download: Not Supported 00:24:21.842 Namespace Management: Not Supported 00:24:21.842 Device Self-Test: Not Supported 00:24:21.842 Directives: Not Supported 00:24:21.842 NVMe-MI: Not Supported 00:24:21.842 Virtualization Management: Not Supported 00:24:21.842 Doorbell Buffer Config: Not Supported 00:24:21.842 Get LBA Status Capability: Not Supported 00:24:21.842 Command & Feature Lockdown Capability: Not Supported 00:24:21.842 Abort Command Limit: 4 00:24:21.842 Async Event Request Limit: 4 00:24:21.842 Number of Firmware Slots: N/A 00:24:21.842 Firmware Slot 1 Read-Only: N/A 00:24:21.842 Firmware Activation Without Reset: N/A 00:24:21.842 Multiple Update Detection Support: N/A 00:24:21.842 Firmware Update Granularity: No Information Provided 00:24:21.842 Per-Namespace SMART Log: Yes 00:24:21.842 Asymmetric Namespace Access Log Page: Supported 00:24:21.842 ANA Transition Time : 10 sec 00:24:21.842 00:24:21.842 Asymmetric Namespace Access Capabilities 00:24:21.842 ANA Optimized State : Supported 00:24:21.842 ANA Non-Optimized State : Supported 00:24:21.842 ANA Inaccessible State : Supported 00:24:21.842 ANA Persistent Loss State : Supported 00:24:21.842 ANA Change State : Supported 00:24:21.842 ANAGRPID is not changed : No 00:24:21.842 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:21.842 00:24:21.842 ANA Group Identifier Maximum : 128 00:24:21.842 Number of ANA Group Identifiers : 128 00:24:21.842 Max Number of Allowed Namespaces : 1024 00:24:21.842 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:24:21.842 Command Effects Log Page: Supported 00:24:21.842 Get Log Page Extended Data: Supported 00:24:21.842 Telemetry Log Pages: Not Supported 00:24:21.842 Persistent Event Log Pages: Not Supported 00:24:21.842 Supported Log Pages Log Page: May Support 00:24:21.842 Commands Supported & Effects Log Page: Not Supported 00:24:21.842 Feature Identifiers & Effects Log Page:May Support 00:24:21.842 NVMe-MI Commands & Effects Log Page: May Support 00:24:21.842 Data Area 4 for Telemetry Log: Not Supported 00:24:21.842 Error Log Page Entries Supported: 128 00:24:21.842 Keep Alive: Supported 00:24:21.842 Keep Alive Granularity: 1000 ms 00:24:21.842 00:24:21.842 NVM Command Set Attributes 00:24:21.842 ========================== 00:24:21.842 Submission Queue Entry Size 00:24:21.842 Max: 64 00:24:21.842 Min: 64 00:24:21.842 Completion Queue Entry Size 00:24:21.842 Max: 16 00:24:21.842 Min: 16 00:24:21.842 Number of Namespaces: 1024 00:24:21.842 Compare Command: Not Supported 00:24:21.842 Write Uncorrectable Command: Not Supported 00:24:21.842 Dataset Management Command: Supported 00:24:21.842 Write Zeroes Command: Supported 00:24:21.842 Set Features Save Field: Not Supported 00:24:21.842 Reservations: Not Supported 00:24:21.842 Timestamp: Not Supported 00:24:21.842 Copy: Not Supported 00:24:21.842 Volatile Write Cache: Present 00:24:21.842 Atomic Write Unit (Normal): 1 00:24:21.842 Atomic Write Unit (PFail): 1 00:24:21.842 Atomic Compare & Write Unit: 1 00:24:21.842 Fused Compare & Write: Not Supported 00:24:21.842 Scatter-Gather List 00:24:21.842 SGL Command Set: Supported 00:24:21.842 SGL Keyed: Not Supported 00:24:21.842 SGL Bit Bucket Descriptor: Not Supported 00:24:21.842 SGL Metadata Pointer: Not Supported 00:24:21.842 Oversized SGL: Not Supported 00:24:21.842 SGL Metadata Address: Not Supported 00:24:21.842 SGL Offset: Supported 00:24:21.842 Transport SGL Data Block: Not Supported 00:24:21.842 Replay Protected Memory Block: Not Supported 00:24:21.842 00:24:21.842 Firmware Slot Information 00:24:21.842 ========================= 00:24:21.842 Active slot: 0 00:24:21.842 00:24:21.842 Asymmetric Namespace Access 00:24:21.842 =========================== 00:24:21.842 Change Count : 0 00:24:21.842 Number of ANA Group Descriptors : 1 00:24:21.842 ANA Group Descriptor : 0 00:24:21.842 ANA Group ID : 1 00:24:21.842 Number of NSID Values : 1 00:24:21.842 Change Count : 0 00:24:21.842 ANA State : 1 00:24:21.842 Namespace Identifier : 1 00:24:21.842 00:24:21.842 Commands Supported and Effects 00:24:21.842 ============================== 00:24:21.842 Admin Commands 00:24:21.842 -------------- 00:24:21.842 Get Log Page (02h): Supported 00:24:21.842 Identify (06h): Supported 00:24:21.842 Abort (08h): Supported 00:24:21.842 Set Features (09h): Supported 00:24:21.842 Get Features (0Ah): Supported 00:24:21.842 Asynchronous Event Request (0Ch): Supported 00:24:21.842 Keep Alive (18h): Supported 00:24:21.842 I/O Commands 00:24:21.842 ------------ 00:24:21.842 Flush (00h): Supported 00:24:21.842 Write (01h): Supported LBA-Change 00:24:21.842 Read (02h): Supported 00:24:21.842 Write Zeroes (08h): Supported LBA-Change 00:24:21.842 Dataset Management (09h): Supported 00:24:21.842 00:24:21.842 Error Log 00:24:21.842 ========= 00:24:21.842 Entry: 0 00:24:21.842 Error Count: 0x3 00:24:21.842 Submission Queue Id: 0x0 00:24:21.842 Command Id: 0x5 00:24:21.842 Phase Bit: 0 00:24:21.842 Status Code: 0x2 00:24:21.842 Status Code Type: 0x0 00:24:21.842 Do Not Retry: 1 00:24:21.842 Error 
Location: 0x28 00:24:21.842 LBA: 0x0 00:24:21.842 Namespace: 0x0 00:24:21.842 Vendor Log Page: 0x0 00:24:21.842 ----------- 00:24:21.842 Entry: 1 00:24:21.842 Error Count: 0x2 00:24:21.842 Submission Queue Id: 0x0 00:24:21.842 Command Id: 0x5 00:24:21.842 Phase Bit: 0 00:24:21.842 Status Code: 0x2 00:24:21.842 Status Code Type: 0x0 00:24:21.842 Do Not Retry: 1 00:24:21.842 Error Location: 0x28 00:24:21.842 LBA: 0x0 00:24:21.842 Namespace: 0x0 00:24:21.842 Vendor Log Page: 0x0 00:24:21.842 ----------- 00:24:21.842 Entry: 2 00:24:21.842 Error Count: 0x1 00:24:21.842 Submission Queue Id: 0x0 00:24:21.842 Command Id: 0x4 00:24:21.842 Phase Bit: 0 00:24:21.842 Status Code: 0x2 00:24:21.842 Status Code Type: 0x0 00:24:21.842 Do Not Retry: 1 00:24:21.842 Error Location: 0x28 00:24:21.842 LBA: 0x0 00:24:21.842 Namespace: 0x0 00:24:21.842 Vendor Log Page: 0x0 00:24:21.842 00:24:21.842 Number of Queues 00:24:21.842 ================ 00:24:21.843 Number of I/O Submission Queues: 128 00:24:21.843 Number of I/O Completion Queues: 128 00:24:21.843 00:24:21.843 ZNS Specific Controller Data 00:24:21.843 ============================ 00:24:21.843 Zone Append Size Limit: 0 00:24:21.843 00:24:21.843 00:24:21.843 Active Namespaces 00:24:21.843 ================= 00:24:21.843 get_feature(0x05) failed 00:24:21.843 Namespace ID:1 00:24:21.843 Command Set Identifier: NVM (00h) 00:24:21.843 Deallocate: Supported 00:24:21.843 Deallocated/Unwritten Error: Not Supported 00:24:21.843 Deallocated Read Value: Unknown 00:24:21.843 Deallocate in Write Zeroes: Not Supported 00:24:21.843 Deallocated Guard Field: 0xFFFF 00:24:21.843 Flush: Supported 00:24:21.843 Reservation: Not Supported 00:24:21.843 Namespace Sharing Capabilities: Multiple Controllers 00:24:21.843 Size (in LBAs): 1310720 (5GiB) 00:24:21.843 Capacity (in LBAs): 1310720 (5GiB) 00:24:21.843 Utilization (in LBAs): 1310720 (5GiB) 00:24:21.843 UUID: 00fd7a0d-1dac-4acb-85c0-f3326d7e21b2 00:24:21.843 Thin Provisioning: Not Supported 00:24:21.843 Per-NS Atomic Units: Yes 00:24:21.843 Atomic Boundary Size (Normal): 0 00:24:21.843 Atomic Boundary Size (PFail): 0 00:24:21.843 Atomic Boundary Offset: 0 00:24:21.843 NGUID/EUI64 Never Reused: No 00:24:21.843 ANA group ID: 1 00:24:21.843 Namespace Write Protected: No 00:24:21.843 Number of LBA Formats: 1 00:24:21.843 Current LBA Format: LBA Format #00 00:24:21.843 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:24:21.843 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:21.843 rmmod nvme_tcp 00:24:21.843 rmmod nvme_fabrics 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:21.843 09:01:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:21.843 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:24:22.101 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:22.101 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:22.101 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:22.101 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:22.101 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:22.101 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:22.101 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:22.102 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:22.102 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:22.102 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:22.102 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:22.102 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:22.102 09:01:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:22.102 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:22.102 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:22.102 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:22.102 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.102 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.102 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:22.102 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:24:22.102 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:22.102 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:22.102 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:24:22.102 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:22.102 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:22.359 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:22.359 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:22.360 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:24:22.360 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:24:22.360 09:02:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:22.927 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:22.927 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:23.186 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:23.186 00:24:23.186 real 0m3.479s 00:24:23.186 user 0m1.280s 00:24:23.186 sys 0m1.556s 00:24:23.186 09:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:23.186 09:02:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.186 ************************************ 00:24:23.186 END TEST nvmf_identify_kernel_target 00:24:23.186 ************************************ 00:24:23.186 09:02:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:23.186 09:02:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:23.186 09:02:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:23.186 09:02:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.186 ************************************ 00:24:23.186 START TEST nvmf_auth_host 00:24:23.186 ************************************ 00:24:23.186 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:23.186 * Looking for test storage... 
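The nvmf_identify_kernel_target output above comes from two spdk_nvme_identify invocations against the kernel nvmet target: one against the well-known discovery subsystem and one against the NVM subsystem that Discovery Log Entry 1 advertises. A minimal stand-alone sketch of the same pair of queries follows, assuming the target from the log were still exported on 10.0.0.1:4420 (outside the test harness these addresses and paths are hypothetical):

IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
# Discovery controller (first identify block above): generation counter and two discovery log entries
"$IDENTIFY" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
# NVM subsystem named by Discovery Log Entry 1 (second identify block above): ANA group, namespace 1, error log
"$IDENTIFY" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'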
00:24:23.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:23.186 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:23.186 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:24:23.186 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:23.446 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:23.446 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.446 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.446 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.446 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.446 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.446 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:23.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.447 --rc genhtml_branch_coverage=1 00:24:23.447 --rc genhtml_function_coverage=1 00:24:23.447 --rc genhtml_legend=1 00:24:23.447 --rc geninfo_all_blocks=1 00:24:23.447 --rc geninfo_unexecuted_blocks=1 00:24:23.447 00:24:23.447 ' 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:23.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.447 --rc genhtml_branch_coverage=1 00:24:23.447 --rc genhtml_function_coverage=1 00:24:23.447 --rc genhtml_legend=1 00:24:23.447 --rc geninfo_all_blocks=1 00:24:23.447 --rc geninfo_unexecuted_blocks=1 00:24:23.447 00:24:23.447 ' 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:23.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.447 --rc genhtml_branch_coverage=1 00:24:23.447 --rc genhtml_function_coverage=1 00:24:23.447 --rc genhtml_legend=1 00:24:23.447 --rc geninfo_all_blocks=1 00:24:23.447 --rc geninfo_unexecuted_blocks=1 00:24:23.447 00:24:23.447 ' 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:23.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.447 --rc genhtml_branch_coverage=1 00:24:23.447 --rc genhtml_function_coverage=1 00:24:23.447 --rc genhtml_legend=1 00:24:23.447 --rc geninfo_all_blocks=1 00:24:23.447 --rc geninfo_unexecuted_blocks=1 00:24:23.447 00:24:23.447 ' 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.447 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.447 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:23.448 Cannot find device "nvmf_init_br" 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:23.448 Cannot find device "nvmf_init_br2" 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:23.448 Cannot find device "nvmf_tgt_br" 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:23.448 Cannot find device "nvmf_tgt_br2" 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:23.448 Cannot find device "nvmf_init_br" 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:23.448 Cannot find device "nvmf_init_br2" 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:23.448 Cannot find device "nvmf_tgt_br" 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:23.448 Cannot find device "nvmf_tgt_br2" 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:23.448 Cannot find device "nvmf_br" 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:23.448 Cannot find device "nvmf_init_if" 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:23.448 Cannot find device "nvmf_init_if2" 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:23.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:23.448 09:02:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:23.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:23.448 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
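The nvmf_veth_init trace above (nvmf/common.sh@145 onward) builds the test network: a veth pair per initiator interface and per target interface, with the target ends moved into the nvmf_tgt_ns_spdk namespace and the host-side *_br peers enslaved to the nvmf_br bridge (the remaining enslavement and iptables steps continue just below). A condensed sketch of the same iproute2 sequence for one initiator/target pair, using only command forms that appear in the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joining the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br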
00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:23.707 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:23.708 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:23.708 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:24:23.708 00:24:23.708 --- 10.0.0.3 ping statistics --- 00:24:23.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.708 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:23.708 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:23.708 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:24:23.708 00:24:23.708 --- 10.0.0.4 ping statistics --- 00:24:23.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.708 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:23.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:23.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:24:23.708 00:24:23.708 --- 10.0.0.1 ping statistics --- 00:24:23.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.708 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:23.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:23.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:24:23.708 00:24:23.708 --- 10.0.0.2 ping statistics --- 00:24:23.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.708 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # return 0 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=84351 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 84351 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:23.708 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 84351 ']' 00:24:23.967 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.967 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:23.967 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
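With connectivity confirmed by the four pings above, nvmfappstart launches the SPDK target inside the namespace and waits for its RPC socket before the auth test continues. A reduced sketch of that step, reusing the command line from the trace; the polling loop is only a simplified stand-in for the waitforlisten helper:

# Run nvmf_tgt in the target namespace with the nvme_auth debug log flag, as the trace does
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# Simplified stand-in for waitforlisten: block until the RPC socket appears
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"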
00:24:23.967 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:23.967 09:02:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=73f673da2ed4316875b7c5b1302b1b3e 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.gEh 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 73f673da2ed4316875b7c5b1302b1b3e 0 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 73f673da2ed4316875b7c5b1302b1b3e 0 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=73f673da2ed4316875b7c5b1302b1b3e 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:24:24.903 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.gEh 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.gEh 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.gEh 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:24:24.904 09:02:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=4004a35766d48984c7cb1b61cdec7c88f1e4bca672aadf8ce7c34b4f40d1072b 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.MPo 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 4004a35766d48984c7cb1b61cdec7c88f1e4bca672aadf8ce7c34b4f40d1072b 3 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 4004a35766d48984c7cb1b61cdec7c88f1e4bca672aadf8ce7c34b4f40d1072b 3 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=4004a35766d48984c7cb1b61cdec7c88f1e4bca672aadf8ce7c34b4f40d1072b 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.MPo 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.MPo 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.MPo 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=dca2863dfc855968feb2e49f9d03eaecccfc87f13a034a8e 00:24:24.904 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.Jeh 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key dca2863dfc855968feb2e49f9d03eaecccfc87f13a034a8e 0 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 dca2863dfc855968feb2e49f9d03eaecccfc87f13a034a8e 0 
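The gen_dhchap_key calls above draw the secret bytes with xxd from /dev/urandom, wrap them as a DHHC-1 secret for the requested digest, and store the result in a 0600 temp file that host/auth.sh records in keys[] and ckeys[]. A minimal sketch of the steps visible in the trace; the inline "python -" encoder is not shown in the log, so format_dhchap_key below is only a placeholder marking where that wrapping happens:

gen_dhchap_key() {                       # usage: gen_dhchap_key <null|sha256|sha384|sha512> <hex length>
    local -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')   # mapping from the trace
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)            # len/2 random bytes, hex encoded
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"  # placeholder for the python DHHC-1 encoder
    chmod 0600 "$file"
    echo "$file"
}
# e.g. keys[0]=$(gen_dhchap_key null 32); ckeys[0]=$(gen_dhchap_key sha512 64), matching host/auth.sh@73 above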
00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=dca2863dfc855968feb2e49f9d03eaecccfc87f13a034a8e 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.Jeh 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.Jeh 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Jeh 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=8fd7ef0c8af33afc15d2b20a6a00b5993045bccef4b964eb 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.LNH 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 8fd7ef0c8af33afc15d2b20a6a00b5993045bccef4b964eb 2 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 8fd7ef0c8af33afc15d2b20a6a00b5993045bccef4b964eb 2 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=8fd7ef0c8af33afc15d2b20a6a00b5993045bccef4b964eb 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:24:25.163 09:02:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.LNH 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.LNH 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.LNH 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:25.163 09:02:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=1ee0b1d53ebaa405fc94eaa8b79d4c9d 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.nia 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 1ee0b1d53ebaa405fc94eaa8b79d4c9d 1 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 1ee0b1d53ebaa405fc94eaa8b79d4c9d 1 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=1ee0b1d53ebaa405fc94eaa8b79d4c9d 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.nia 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.nia 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.nia 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=0468877e0b7e9a07ac2a0615998b5ffd 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.ZE3 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 0468877e0b7e9a07ac2a0615998b5ffd 1 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 0468877e0b7e9a07ac2a0615998b5ffd 1 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:24:25.163 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=0468877e0b7e9a07ac2a0615998b5ffd 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.ZE3 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.ZE3 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ZE3 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=699b95bcb115b1e9aac7c51d5e267ae4eae301f225a156f6 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.bQN 00:24:25.164 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 699b95bcb115b1e9aac7c51d5e267ae4eae301f225a156f6 2 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 699b95bcb115b1e9aac7c51d5e267ae4eae301f225a156f6 2 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=699b95bcb115b1e9aac7c51d5e267ae4eae301f225a156f6 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.bQN 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.bQN 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.bQN 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:24:25.423 09:02:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=81f71a85a4cfbc6348034edada348f44 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.qPv 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 81f71a85a4cfbc6348034edada348f44 0 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 81f71a85a4cfbc6348034edada348f44 0 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=81f71a85a4cfbc6348034edada348f44 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.qPv 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.qPv 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.qPv 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=71b85c256de40a61c5da4b55ae0051ef40e5465dd84002ccd9b56056d59166bf 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.K1u 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 71b85c256de40a61c5da4b55ae0051ef40e5465dd84002ccd9b56056d59166bf 3 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 71b85c256de40a61c5da4b55ae0051ef40e5465dd84002ccd9b56056d59166bf 3 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=71b85c256de40a61c5da4b55ae0051ef40e5465dd84002ccd9b56056d59166bf 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.K1u 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.K1u 00:24:25.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.K1u 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 84351 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 84351 ']' 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:25.423 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gEh 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.MPo ]] 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MPo 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Jeh 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.LNH ]] 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.LNH 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.nia 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ZE3 ]] 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZE3 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.682 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.bQN 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.qPv ]] 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.qPv 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.K1u 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:25.941 09:02:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:25.941 09:02:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:26.201 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:26.201 Waiting for block devices as requested 00:24:26.201 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:26.459 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:27.026 No valid GPT data, bailing 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:27.026 No valid GPT data, bailing 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:27.026 09:02:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:24:27.026 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:24:27.026 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:27.026 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:27.026 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:24:27.026 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:24:27.026 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:27.026 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:27.026 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:24:27.026 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:27.026 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:27.286 No valid GPT data, bailing 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:27.286 No valid GPT data, bailing 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -a 10.0.0.1 -t tcp -s 4420 00:24:27.286 00:24:27.286 Discovery Log Number of Records 2, Generation counter 2 00:24:27.286 =====Discovery Log Entry 0====== 00:24:27.286 trtype: tcp 00:24:27.286 adrfam: ipv4 00:24:27.286 subtype: current discovery subsystem 00:24:27.286 treq: not specified, sq flow control disable supported 00:24:27.286 portid: 1 00:24:27.286 trsvcid: 4420 00:24:27.286 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:27.286 traddr: 10.0.0.1 00:24:27.286 eflags: none 00:24:27.286 sectype: none 00:24:27.286 =====Discovery Log Entry 1====== 00:24:27.286 trtype: tcp 00:24:27.286 adrfam: ipv4 00:24:27.286 subtype: nvme subsystem 00:24:27.286 treq: not specified, sq flow control disable supported 00:24:27.286 portid: 1 00:24:27.286 trsvcid: 4420 00:24:27.286 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:27.286 traddr: 10.0.0.1 00:24:27.286 eflags: none 00:24:27.286 sectype: none 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.286 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
10.0.0.1 ]] 00:24:27.546 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.547 nvme0n1 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.547 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.806 nvme0n1 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.806 
09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.806 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:27.807 09:02:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.807 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.065 nvme0n1 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:28.065 09:02:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:28.065 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.066 09:02:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.066 nvme0n1 00:24:28.066 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.066 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.066 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.066 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.066 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.066 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.066 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.066 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.066 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.066 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.325 09:02:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.325 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.325 nvme0n1 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.326 
09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.326 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
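For orientation, the cycle that repeats throughout this trace (once per digest/dhgroup/keyid combination) condenses to the shell sketch below. It is only a summary of what the xtrace output above already shows, not the full host/auth.sh implementation: rpc_cmd is the suite's wrapper around scripts/rpc.py, nvmet_auth_set_key is the helper whose echo 'hmac(sha256)' / echo ffdhe2048 / echo DHHC-1:... steps appear above, keyN/ckeyN refer to the DHHC-1 key pairs defined earlier in the script, and 10.0.0.1 is simply what get_main_ns_ip resolves to on this run.

  # target side: install the key material for this keyid (digest, dhgroup, key, ctrlr key)
  nvmet_auth_set_key sha256 ffdhe2048 3

  # host side: restrict the initiator to the digest/dhgroup under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # connect with the matching key; --dhchap-ctrlr-key is passed only when a ckeyN
  # exists for this keyid (keyid 4 has an empty ckey in this run, so it is omitted)
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3

  # authentication succeeded if the controller shows up; detach before the next keyid
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines interleaved in the output appear to be the bdev name that bdev_nvme_attach_controller reports for the attached controller's namespace, and the ip_candidates logic traced above shows get_main_ns_ip preferring NVMF_INITIATOR_IP for TCP transports (NVMF_FIRST_TARGET_IP for RDMA), which is why every attach in this log targets 10.0.0.1.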
00:24:28.584 nvme0n1 00:24:28.584 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.584 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.584 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.584 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.584 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.584 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.584 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.584 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.584 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.584 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.584 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.585 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:28.585 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.585 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:28.585 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.585 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.585 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:28.585 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:28.585 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:28.585 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:28.585 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.585 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:28.843 09:02:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.843 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.102 nvme0n1 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.102 09:02:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.102 09:02:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:29.102 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:29.103 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:29.103 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.103 09:02:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.361 nvme0n1 00:24:29.361 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.361 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.361 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.362 nvme0n1 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.362 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.621 nvme0n1 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.621 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.887 nvme0n1 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.887 09:02:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:30.454 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:30.454 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:30.454 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.455 09:02:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.455 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.714 nvme0n1 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.714 09:02:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.714 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.973 nvme0n1 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:30.973 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.974 09:02:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.233 nvme0n1 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.233 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.492 nvme0n1 00:24:31.492 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.492 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.492 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:31.493 09:02:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.493 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.752 nvme0n1 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.752 09:02:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.672 nvme0n1 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.672 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.673 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.942 nvme0n1 00:24:33.942 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.201 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.201 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.201 09:02:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.201 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.201 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.201 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.201 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.201 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.201 09:02:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.201 09:02:12 
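
One small shell detail in the verification step above: inside [[ ]], the right-hand side of == is treated as a glob pattern unless it is quoted or escaped, which is why the expected controller name shows up as \n\v\m\e\0 in the trace; every character is backslash-escaped to force a literal, byte-for-byte comparison. A standalone equivalent of the traced check (the name variable is only illustrative):

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == \n\v\m\e\0 ]]    # behaves exactly like [[ $name == "nvme0" ]]
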
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.201 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.461 nvme0n1 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:34.461 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.461 
09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.029 nvme0n1 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:35.029 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.030 09:02:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.289 nvme0n1 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.289 09:02:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:35.289 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.290 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.857 nvme0n1 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.857 09:02:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.424 nvme0n1 00:24:36.424 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.424 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.424 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
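
get_main_ns_ip, which runs before every attach, only maps the transport to the address the initiator should dial; for tcp that is NVMF_INITIATOR_IP, which resolves to 10.0.0.1 in this job. A sketch of the logic as it reads off the nvmf/common.sh lines in the trace (the TEST_TRANSPORT variable name is an assumption; the candidate table and the fallbacks mirror the traced statements):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      # Each transport publishes its reachable address in a different variable.
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1           # no transport selected
      ip=${ip_candidates[$TEST_TRANSPORT]}           # e.g. NVMF_INITIATOR_IP for tcp
      [[ -z $ip ]] && return 1                       # transport has no candidate

      # Indirect expansion: print the value of the selected variable (10.0.0.1 here).
      echo "${!ip}"
  }
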
xtrace_disable 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.425 
09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.425 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.994 nvme0n1 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:36.994 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.995 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.254 09:02:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.513 nvme0n1 00:24:37.513 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.513 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.513 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.513 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.513 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.771 09:02:15 
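
On the target side, nvmet_auth_set_key installs the matching secret before each attach. The xtrace only shows the echoed values (the hash spec such as 'hmac(sha256)', the FFDHE group, the DHHC-1 host key, and the controller key when one is defined); the redirection targets are not traced. A plausible reading, assuming the values are written into the kernel nvmet target's per-host configfs attributes (the paths and attribute names below are assumptions, not taken from the log):

  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}

      # Hypothetical configfs entry for the allowed host.
      local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac($digest)" > "$host_dir/dhchap_hash"      # e.g. hmac(sha256)
      echo "$dhgroup"      > "$host_dir/dhchap_dhgroup"   # e.g. ffdhe8192
      echo "$key"          > "$host_dir/dhchap_key"       # DHHC-1 host secret
      [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
  }
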
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:37.771 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:37.772 09:02:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.772 09:02:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.340 nvme0n1 00:24:38.340 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.340 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.340 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.340 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.340 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
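
Taken end to end, this whole stretch of the log is three nested loops: for each digest (sha256 above, sha384 from here on), each FFDHE group, and each key index 0 through 4, the target key is installed and one full connect/verify/disconnect round trip is performed. The driver loop, as the host/auth.sh@100-104 entries in the trace indicate (only the array values visible in this excerpt are listed; the complete arrays are defined earlier in the script):

  digests=(sha256 sha384)
  dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side round trip
          done
      done
  done
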
common/autotest_common.sh@10 -- # set +x 00:24:38.341 nvme0n1 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.341 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.601 nvme0n1 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.601 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:38.602 
09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.602 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.861 nvme0n1 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.861 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.862 
09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.862 nvme0n1 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.862 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:39.122 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.123 09:02:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.123 nvme0n1 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.123 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.383 nvme0n1 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.383 
09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.383 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:39.383 09:02:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:39.384 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:39.384 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.384 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.384 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:39.384 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.384 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:39.384 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:39.384 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:39.384 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.384 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.384 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.643 nvme0n1 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:39.643 09:02:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.643 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.901 nvme0n1 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.901 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.902 09:02:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.902 nvme0n1 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.902 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.161 
09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.161 09:02:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
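The xtrace records above repeat one pattern per digest/dhgroup/keyid combination: restrict the initiator to the pair under test, attach with DH-HMAC-CHAP keys, confirm the controller came up, then detach. A minimal sketch of that pattern, using only the RPC calls visible in the trace; rpc_cmd is the autotest wrapper around scripts/rpc.py, the key names key0/ckey0 are assumed to have been registered earlier in host/auth.sh, and the 10.0.0.1:4420 listener is the one shown in the log (the DHHC-1 secrets themselves are not reproduced here):

    # One pass of the traced loop: digest sha384, dhgroup ffdhe2048, keyid 0.
    # Limit the host to the digest/dhgroup pair being exercised.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    # Connect with the host key; the controller key (ckey0) makes the auth bidirectional.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # The attach only succeeds if DH-HMAC-CHAP completed; verify, then tear down.
    [[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The trace then advances keyid (0 through 4, with keyid 4 having no controller key) and, once all keys pass, moves to the next dhgroup (ffdhe2048, ffdhe3072, ffdhe4096, ...) under the same sha384 digest.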
00:24:40.161 nvme0n1 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:40.161 09:02:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:40.161 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:40.162 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:40.162 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.162 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.420 nvme0n1 00:24:40.420 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.420 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.420 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.420 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.420 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.420 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.420 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.420 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.420 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.420 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.420 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.420 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.420 09:02:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:40.420 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.421 09:02:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.421 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.679 nvme0n1 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.679 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.938 nvme0n1 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.938 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.198 09:02:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.198 nvme0n1 00:24:41.198 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.198 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.198 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.198 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.198 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.198 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.198 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.198 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.198 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.198 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.458 nvme0n1 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.458 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.718 09:02:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.718 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.977 nvme0n1 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:41.977 09:02:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:41.977 09:02:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.235 nvme0n1 00:24:42.235 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.235 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.235 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.235 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.235 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.235 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.235 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.235 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.235 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.235 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.493 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.752 nvme0n1 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.752 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.011 nvme0n1 00:24:43.011 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.011 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.011 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.011 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.011 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.011 09:02:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:43.269 09:02:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.269 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.527 nvme0n1 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:43.527 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.528 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.094 nvme0n1 00:24:44.094 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.094 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.094 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.094 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.094 09:02:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.094 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.095 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.661 nvme0n1 00:24:44.661 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.661 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.661 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.661 09:02:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.661 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.661 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.919 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:44.920 09:02:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:44.920 09:02:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.485 nvme0n1 00:24:45.485 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.485 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:45.486 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.486 
09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.053 nvme0n1 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.053 09:02:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.620 nvme0n1 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:46.620 09:02:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:46.620 09:02:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.620 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.879 nvme0n1 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:46.879 09:02:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:46.879 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:46.880 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:46.880 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.880 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.880 nvme0n1 00:24:46.880 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.880 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.880 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.880 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.880 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.880 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.138 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.138 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.138 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.139 09:02:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.139 nvme0n1 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.139 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.399 nvme0n1 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.399 nvme0n1 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.399 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:47.659 nvme0n1 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.659 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.919 nvme0n1 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:47.919 
09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.919 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.178 nvme0n1 00:24:48.178 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.178 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.178 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.178 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.178 09:02:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.178 
09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.178 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.437 nvme0n1 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:48.437 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.438 nvme0n1 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.438 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.697 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.697 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.697 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.698 nvme0n1 00:24:48.698 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.957 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.957 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.957 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.957 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.957 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.957 
09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.957 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.957 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:48.958 09:02:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.958 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.217 nvme0n1 00:24:49.217 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.217 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.217 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.217 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.217 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.217 09:02:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.217 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.217 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.217 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.217 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.217 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.217 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.217 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:49.217 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.217 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.217 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.217 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:49.217 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:49.218 09:02:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.218 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.477 nvme0n1 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.477 09:02:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.477 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.736 nvme0n1 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.736 
09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.736 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
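The trace above repeats the same host/auth.sh sequence for every key index and FFDHE group: nvmet_auth_set_key programs the target side, then connect_authenticate restricts the initiator to one digest/dhgroup pair, attaches a controller with DH-HMAC-CHAP, verifies it via bdev_nvme_get_controllers, and detaches it again. The sketch below condenses the pass that finishes just before this point (sha512 / ffdhe4096 / keyid 4) into plain RPC calls. It is an illustration only: it assumes the standard SPDK scripts/rpc.py client is available and that the DHHC-1 keys were already registered under the names key0..key4 / ckey0..ckey3 earlier in the test run (outside this excerpt), exactly as the rpc_cmd wrapper in the trace assumes.

#!/usr/bin/env bash
# Hedged sketch of one connect_authenticate pass, mirroring the RPCs seen in the trace.
rpc_py=scripts/rpc.py            # assumed location of the SPDK JSON-RPC client
digest=sha512                    # --dhchap-digests value used throughout this run
dhgroup=ffdhe4096                # --dhchap-dhgroups value for this pass
keyid=4                          # key index; keyid 4 has no controller (ckey) key in this run

# Limit the initiator to the digest/dhgroup pair under test.
"$rpc_py" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with DH-HMAC-CHAP. For key indexes that also have a controller key,
# the trace additionally passes --dhchap-ctrlr-key "ckey${keyid}".
"$rpc_py" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}"

# Confirm the controller came up, then tear it down before the next iteration.
"$rpc_py" bdev_nvme_get_controllers | jq -r '.[].name' | grep -qx nvme0
"$rpc_py" bdev_nvme_detach_controller nvme0

Each pass in the trace differs only in the dhgroup and keyid arguments; the digest stays sha512 throughout this portion of the run.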
00:24:49.995 nvme0n1 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.996 09:02:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.996 09:02:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.256 nvme0n1 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.256 09:02:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.256 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.257 09:02:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.257 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.826 nvme0n1 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:50.826 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.085 nvme0n1 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:51.085 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:51.086 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:51.086 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.086 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.086 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.086 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:51.086 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.086 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:51.086 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.086 09:02:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.086 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.344 nvme0n1 00:24:51.344 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.344 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.344 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.344 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.344 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.344 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.613 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.908 nvme0n1 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzNmNjczZGEyZWQ0MzE2ODc1YjdjNWIxMzAyYjFiM2WGEWe2: 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: ]] 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDAwNGEzNTc2NmQ0ODk4NGM3Y2IxYjYxY2RlYzdjODhmMWU0YmNhNjcyYWFkZjhjZTdjMzRiNGY0MGQxMDcyYs3nsuc=: 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.908 09:02:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.908 09:02:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.487 nvme0n1 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.487 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.488 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.488 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:52.488 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:52.488 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:52.488 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.488 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.488 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:52.488 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.488 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:52.488 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:52.488 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:52.488 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:52.488 09:02:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.488 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.055 nvme0n1 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.056 09:02:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 nvme0n1 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.624 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Njk5Yjk1YmNiMTE1YjFlOWFhYzdjNTFkNWUyNjdhZTRlYWUzMDFmMjI1YTE1NmY21ecf8g==: 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: ]] 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODFmNzFhODVhNGNmYmM2MzQ4MDM0ZWRhZGEzNDhmNDSjp9Ei: 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.625 09:02:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.193 nvme0n1 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzFiODVjMjU2ZGU0MGE2MWM1ZGE0YjU1YWUwMDUxZWY0MGU1NDY1ZGQ4NDAwMmNjZDliNTYwNTZkNTkxNjZiZjARj+0=: 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:54.193 09:02:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.193 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.762 nvme0n1 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
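The trace above completes the sha512/ffdhe8192 pass of the digest/dhgroup/keyid loop. For orientation, one iteration of that loop condenses to the host-side RPC sequence sketched below; this is only a summary of the rpc_cmd calls already visible in the trace, and it assumes the helper environment of SPDK's test/nvmf/host/auth.sh and nvmf/common.sh (rpc_cmd, nvmet_auth_set_key, and keyring entries named key0..key4/ckey0..ckey4 registered earlier in the script, which are not shown in this excerpt):

  nvmet_auth_set_key sha512 ffdhe8192 4                      # program the target side with the secret for this keyid
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key4                                      # --dhchap-ctrlr-key ckeyN is added only when a controller key exists for the keyid
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'       # expected to report nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0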
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.762 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.021 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.021 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:55.021 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:55.021 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:55.021 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:55.021 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.021 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.021 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:55.021 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.021 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.022 request: 00:24:55.022 { 00:24:55.022 "name": "nvme0", 00:24:55.022 "trtype": "tcp", 00:24:55.022 "traddr": "10.0.0.1", 00:24:55.022 "adrfam": "ipv4", 00:24:55.022 "trsvcid": "4420", 00:24:55.022 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:55.022 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:55.022 "prchk_reftag": false, 00:24:55.022 "prchk_guard": false, 00:24:55.022 "hdgst": false, 00:24:55.022 "ddgst": false, 00:24:55.022 "allow_unrecognized_csi": false, 00:24:55.022 "method": "bdev_nvme_attach_controller", 00:24:55.022 "req_id": 1 00:24:55.022 } 00:24:55.022 Got JSON-RPC error response 00:24:55.022 response: 00:24:55.022 { 00:24:55.022 "code": -5, 00:24:55.022 "message": "Input/output error" 00:24:55.022 } 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.022 request: 00:24:55.022 { 00:24:55.022 "name": "nvme0", 00:24:55.022 "trtype": "tcp", 00:24:55.022 "traddr": "10.0.0.1", 00:24:55.022 "adrfam": "ipv4", 00:24:55.022 "trsvcid": "4420", 00:24:55.022 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:55.022 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:55.022 "prchk_reftag": false, 00:24:55.022 "prchk_guard": false, 00:24:55.022 "hdgst": false, 00:24:55.022 "ddgst": false, 00:24:55.022 "dhchap_key": "key2", 00:24:55.022 "allow_unrecognized_csi": false, 00:24:55.022 "method": "bdev_nvme_attach_controller", 00:24:55.022 "req_id": 1 00:24:55.022 } 00:24:55.022 Got JSON-RPC error response 00:24:55.022 response: 00:24:55.022 { 00:24:55.022 "code": -5, 00:24:55.022 "message": "Input/output error" 00:24:55.022 } 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:55.022 09:02:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:55.022 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:55.023 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:55.023 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:24:55.023 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:55.023 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:55.023 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.023 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:55.023 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.023 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:55.023 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.023 09:02:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.023 request: 00:24:55.023 { 00:24:55.023 "name": "nvme0", 00:24:55.023 "trtype": "tcp", 00:24:55.023 "traddr": "10.0.0.1", 00:24:55.023 "adrfam": "ipv4", 00:24:55.023 "trsvcid": "4420", 
00:24:55.023 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:55.023 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:55.023 "prchk_reftag": false, 00:24:55.023 "prchk_guard": false, 00:24:55.023 "hdgst": false, 00:24:55.023 "ddgst": false, 00:24:55.023 "dhchap_key": "key1", 00:24:55.023 "dhchap_ctrlr_key": "ckey2", 00:24:55.023 "allow_unrecognized_csi": false, 00:24:55.023 "method": "bdev_nvme_attach_controller", 00:24:55.023 "req_id": 1 00:24:55.023 } 00:24:55.023 Got JSON-RPC error response 00:24:55.023 response: 00:24:55.023 { 00:24:55.023 "code": -5, 00:24:55.023 "message": "Input/output error" 00:24:55.023 } 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.023 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.282 nvme0n1 00:24:55.282 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.282 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.283 request: 00:24:55.283 { 00:24:55.283 "name": "nvme0", 00:24:55.283 "dhchap_key": "key1", 00:24:55.283 "dhchap_ctrlr_key": "ckey2", 00:24:55.283 "method": "bdev_nvme_set_keys", 00:24:55.283 "req_id": 1 00:24:55.283 } 00:24:55.283 Got JSON-RPC error response 00:24:55.283 response: 00:24:55.283 
{ 00:24:55.283 "code": -13, 00:24:55.283 "message": "Permission denied" 00:24:55.283 } 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:55.283 09:02:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhMjg2M2RmYzg1NTk2OGZlYjJlNDlmOWQwM2VhZWNjY2ZjODdmMTNhMDM0YThlSOMcyA==: 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: ]] 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZkN2VmMGM4YWYzM2FmYzE1ZDJiMjBhNmEwMGI1OTkzMDQ1YmNjZWY0Yjk2NGViorz1ew==: 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.660 nvme0n1 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.660 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWVlMGIxZDUzZWJhYTQwNWZjOTRlYWE4Yjc5ZDRjOWQvGnTY: 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: ]] 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDQ2ODg3N2UwYjdlOWEwN2FjMmEwNjE1OTk4YjVmZmTg9QmT: 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.661 request: 00:24:56.661 { 00:24:56.661 "name": "nvme0", 00:24:56.661 "dhchap_key": "key2", 00:24:56.661 "dhchap_ctrlr_key": "ckey1", 00:24:56.661 "method": "bdev_nvme_set_keys", 00:24:56.661 "req_id": 1 00:24:56.661 } 00:24:56.661 Got JSON-RPC error response 00:24:56.661 response: 00:24:56.661 { 00:24:56.661 "code": -13, 00:24:56.661 "message": "Permission denied" 00:24:56.661 } 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:56.661 09:02:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:57.598 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.598 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.598 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:57.598 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.598 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.598 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:57.598 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:57.598 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:57.598 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:57.598 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:24:57.598 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.857 rmmod nvme_tcp 00:24:57.857 rmmod nvme_fabrics 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 84351 ']' 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 84351 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 84351 ']' 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 84351 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84351 00:24:57.857 killing process with pid 84351 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84351' 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 84351 00:24:57.857 09:02:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 84351 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:58.797 09:02:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:58.797 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:24:59.057 09:02:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:59.625 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:59.885 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
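For reference outside the harness, the cleanup traced above unwinds the kernel nvmet target through configfs in dependency order: the host authorization link and the host entry first, then (after the namespace is disabled by the echo 0 step) the port-to-subsystem link, the namespace, the port, and finally the subsystem itself, before the modules are unloaded. A minimal stand-alone sketch, using the same NQNs and configfs paths recorded in this run:

    # drop host authorization, then the host entry
    rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    # unlink the subsystem from the port, then remove namespace, port and subsystem
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    # unload the kernel target modules
    modprobe -r nvmet_tcp nvmet

The order matters: configfs refuses to rmdir a subsystem that still holds a namespace or is still linked into a port.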
00:24:59.885 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:59.885 09:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.gEh /tmp/spdk.key-null.Jeh /tmp/spdk.key-sha256.nia /tmp/spdk.key-sha384.bQN /tmp/spdk.key-sha512.K1u /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:24:59.885 09:02:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:00.144 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:00.403 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:00.403 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:00.403 00:25:00.403 real 0m37.130s 00:25:00.403 user 0m34.345s 00:25:00.403 sys 0m4.167s 00:25:00.403 09:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:00.403 09:02:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.403 ************************************ 00:25:00.403 END TEST nvmf_auth_host 00:25:00.403 ************************************ 00:25:00.403 09:02:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:00.403 09:02:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:00.403 09:02:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:00.403 09:02:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:00.403 09:02:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.403 ************************************ 00:25:00.403 START TEST nvmf_digest 00:25:00.403 ************************************ 00:25:00.403 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:00.403 * Looking for test storage... 
00:25:00.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:00.403 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:00.403 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:25:00.403 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:00.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.663 --rc genhtml_branch_coverage=1 00:25:00.663 --rc genhtml_function_coverage=1 00:25:00.663 --rc genhtml_legend=1 00:25:00.663 --rc geninfo_all_blocks=1 00:25:00.663 --rc geninfo_unexecuted_blocks=1 00:25:00.663 00:25:00.663 ' 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:00.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.663 --rc genhtml_branch_coverage=1 00:25:00.663 --rc genhtml_function_coverage=1 00:25:00.663 --rc genhtml_legend=1 00:25:00.663 --rc geninfo_all_blocks=1 00:25:00.663 --rc geninfo_unexecuted_blocks=1 00:25:00.663 00:25:00.663 ' 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:00.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.663 --rc genhtml_branch_coverage=1 00:25:00.663 --rc genhtml_function_coverage=1 00:25:00.663 --rc genhtml_legend=1 00:25:00.663 --rc geninfo_all_blocks=1 00:25:00.663 --rc geninfo_unexecuted_blocks=1 00:25:00.663 00:25:00.663 ' 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:00.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.663 --rc genhtml_branch_coverage=1 00:25:00.663 --rc genhtml_function_coverage=1 00:25:00.663 --rc genhtml_legend=1 00:25:00.663 --rc geninfo_all_blocks=1 00:25:00.663 --rc geninfo_unexecuted_blocks=1 00:25:00.663 00:25:00.663 ' 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.663 09:02:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.663 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:00.664 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@456 -- # nvmf_veth_init 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:00.664 Cannot find device "nvmf_init_br" 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:00.664 Cannot find device "nvmf_init_br2" 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:00.664 Cannot find device "nvmf_tgt_br" 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:25:00.664 Cannot find device "nvmf_tgt_br2" 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:00.664 Cannot find device "nvmf_init_br" 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:00.664 Cannot find device "nvmf_init_br2" 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:00.664 Cannot find device "nvmf_tgt_br" 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:00.664 Cannot find device "nvmf_tgt_br2" 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:00.664 Cannot find device "nvmf_br" 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:00.664 Cannot find device "nvmf_init_if" 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:00.664 Cannot find device "nvmf_init_if2" 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:00.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:00.664 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:00.664 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:00.924 09:02:38 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:00.924 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:00.924 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:25:00.924 00:25:00.924 --- 10.0.0.3 ping statistics --- 00:25:00.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.924 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:00.924 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:00.924 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:25:00.924 00:25:00.924 --- 10.0.0.4 ping statistics --- 00:25:00.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.924 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:00.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:00.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:25:00.924 00:25:00.924 --- 10.0.0.1 ping statistics --- 00:25:00.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.924 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:00.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:25:00.924 00:25:00.924 --- 10.0.0.2 ping statistics --- 00:25:00.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.924 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # return 0 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:00.924 ************************************ 00:25:00.924 START TEST nvmf_digest_clean 00:25:00.924 ************************************ 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
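The interface work traced above builds the veth-and-bridge topology these tests run on: the initiator keeps 10.0.0.1 and 10.0.0.2 on the host, the target gets 10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace, and everything is joined through the nvmf_br bridge with TCP port 4420 allowed through iptables. Condensed from the commands recorded in this run (the SPDK_NVMF comment tags the harness adds for later cleanup are dropped here), the setup amounts to:

    ip netns add nvmf_tgt_ns_spdk
    # two veth pairs per side; the *_br ends stay on the host for the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator addresses on the host, target addresses inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring everything up, then bridge the *_br ends together
    ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" up
        ip link set "$br" master nvmf_br
    done
    # let NVMe/TCP traffic in and across the bridge
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) are the sanity check that both directions work before any NVMe-oF traffic is attempted.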
00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=86013 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 86013 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 86013 ']' 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:00.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:00.924 09:02:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:01.184 [2024-09-28 09:02:39.011657] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:25:01.184 [2024-09-28 09:02:39.011841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.443 [2024-09-28 09:02:39.182456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.443 [2024-09-28 09:02:39.368932] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.443 [2024-09-28 09:02:39.369010] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.443 [2024-09-28 09:02:39.369029] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.443 [2024-09-28 09:02:39.369045] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.443 [2024-09-28 09:02:39.369056] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:01.443 [2024-09-28 09:02:39.369093] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.041 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:02.041 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:02.041 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:02.041 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:02.041 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:02.300 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.300 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:02.300 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:02.300 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:02.300 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.300 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:02.300 [2024-09-28 09:02:40.200466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:02.559 null0 00:25:02.559 [2024-09-28 09:02:40.301478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.559 [2024-09-28 09:02:40.325662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86051 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86051 /var/tmp/bperf.sock 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 86051 ']' 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:02.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:02.559 09:02:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:02.559 [2024-09-28 09:02:40.443397] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:25:02.559 [2024-09-28 09:02:40.443562] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86051 ] 00:25:02.892 [2024-09-28 09:02:40.616449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.892 [2024-09-28 09:02:40.824632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.459 09:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:03.459 09:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:03.459 09:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:03.459 09:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:03.459 09:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:03.716 [2024-09-28 09:02:41.698796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:03.974 09:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.974 09:02:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:04.233 nvme0n1 00:25:04.233 09:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:04.233 09:02:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:04.233 Running I/O for 2 seconds... 
00:25:06.547 14859.00 IOPS, 58.04 MiB/s 14859.00 IOPS, 58.04 MiB/s 00:25:06.547 Latency(us) 00:25:06.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.547 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:06.547 nvme0n1 : 2.01 14852.70 58.02 0.00 0.00 8611.01 8162.21 22163.08 00:25:06.547 =================================================================================================================== 00:25:06.547 Total : 14852.70 58.02 0.00 0.00 8611.01 8162.21 22163.08 00:25:06.547 { 00:25:06.547 "results": [ 00:25:06.547 { 00:25:06.547 "job": "nvme0n1", 00:25:06.547 "core_mask": "0x2", 00:25:06.547 "workload": "randread", 00:25:06.547 "status": "finished", 00:25:06.547 "queue_depth": 128, 00:25:06.547 "io_size": 4096, 00:25:06.547 "runtime": 2.009466, 00:25:06.547 "iops": 14852.702160673532, 00:25:06.547 "mibps": 58.018367815130986, 00:25:06.547 "io_failed": 0, 00:25:06.547 "io_timeout": 0, 00:25:06.547 "avg_latency_us": 8611.009524041596, 00:25:06.547 "min_latency_us": 8162.210909090909, 00:25:06.547 "max_latency_us": 22163.083636363637 00:25:06.547 } 00:25:06.547 ], 00:25:06.547 "core_count": 1 00:25:06.547 } 00:25:06.547 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:06.547 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:06.547 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:06.547 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:06.547 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:06.547 | select(.opcode=="crc32c") 00:25:06.547 | "\(.module_name) \(.executed)"' 00:25:06.547 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:06.547 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:06.547 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:06.547 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:06.547 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86051 00:25:06.548 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 86051 ']' 00:25:06.548 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 86051 00:25:06.548 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:06.548 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:06.548 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86051 00:25:06.548 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:06.548 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:06.548 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 86051' 00:25:06.548 killing process with pid 86051 00:25:06.548 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 86051 00:25:06.548 Received shutdown signal, test time was about 2.000000 seconds 00:25:06.548 00:25:06.548 Latency(us) 00:25:06.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.548 =================================================================================================================== 00:25:06.548 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.548 09:02:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 86051 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86118 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86118 /var/tmp/bperf.sock 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 86118 ']' 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:07.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:07.484 09:02:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:07.484 [2024-09-28 09:02:45.476936] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:25:07.484 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:07.484 Zero copy mechanism will not be used. 
00:25:07.484 [2024-09-28 09:02:45.477082] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86118 ] 00:25:07.743 [2024-09-28 09:02:45.632643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.002 [2024-09-28 09:02:45.782544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.570 09:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:08.570 09:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:08.570 09:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:08.570 09:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:08.570 09:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:08.829 [2024-09-28 09:02:46.776596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:09.087 09:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.087 09:02:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.347 nvme0n1 00:25:09.347 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:09.347 09:02:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:09.347 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:09.347 Zero copy mechanism will not be used. 00:25:09.347 Running I/O for 2 seconds... 
00:25:11.659 7264.00 IOPS, 908.00 MiB/s 7216.00 IOPS, 902.00 MiB/s 00:25:11.659 Latency(us) 00:25:11.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.659 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:11.659 nvme0n1 : 2.00 7213.93 901.74 0.00 0.00 2214.62 2010.76 3961.95 00:25:11.659 =================================================================================================================== 00:25:11.659 Total : 7213.93 901.74 0.00 0.00 2214.62 2010.76 3961.95 00:25:11.659 { 00:25:11.659 "results": [ 00:25:11.659 { 00:25:11.659 "job": "nvme0n1", 00:25:11.660 "core_mask": "0x2", 00:25:11.660 "workload": "randread", 00:25:11.660 "status": "finished", 00:25:11.660 "queue_depth": 16, 00:25:11.660 "io_size": 131072, 00:25:11.660 "runtime": 2.002793, 00:25:11.660 "iops": 7213.925752686374, 00:25:11.660 "mibps": 901.7407190857967, 00:25:11.660 "io_failed": 0, 00:25:11.660 "io_timeout": 0, 00:25:11.660 "avg_latency_us": 2214.6162327594884, 00:25:11.660 "min_latency_us": 2010.7636363636364, 00:25:11.660 "max_latency_us": 3961.949090909091 00:25:11.660 } 00:25:11.660 ], 00:25:11.660 "core_count": 1 00:25:11.660 } 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:11.660 | select(.opcode=="crc32c") 00:25:11.660 | "\(.module_name) \(.executed)"' 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86118 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 86118 ']' 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 86118 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86118 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:11.660 killing process with pid 86118 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 86118' 00:25:11.660 Received shutdown signal, test time was about 2.000000 seconds 00:25:11.660 00:25:11.660 Latency(us) 00:25:11.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.660 =================================================================================================================== 00:25:11.660 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 86118 00:25:11.660 09:02:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 86118 00:25:13.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86186 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86186 /var/tmp/bperf.sock 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 86186 ']' 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:13.050 09:02:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:13.050 [2024-09-28 09:02:50.710845] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:25:13.050 [2024-09-28 09:02:50.711031] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86186 ] 00:25:13.050 [2024-09-28 09:02:50.879076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.346 [2024-09-28 09:02:51.045668] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.923 09:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:13.923 09:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:13.923 09:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:13.923 09:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:13.923 09:02:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:14.183 [2024-09-28 09:02:52.018407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:14.183 09:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:14.183 09:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:14.442 nvme0n1 00:25:14.442 09:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:14.442 09:02:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:14.701 Running I/O for 2 seconds... 
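(Editorial note: after each 2-second run the harness confirms that crc32c digest work was actually executed and by which accel module — software is expected here, since scan_dsa=false. A minimal sketch of that check, reusing the jq filter that appears in this log; the sample output line is an assumption for illustration:)

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # assumed example output: "software 57912" — module name plus a non-zero executed count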
00:25:16.576 15876.00 IOPS, 62.02 MiB/s 15812.00 IOPS, 61.77 MiB/s 00:25:16.577 Latency(us) 00:25:16.577 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.577 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:16.577 nvme0n1 : 2.01 15834.51 61.85 0.00 0.00 8076.51 2502.28 19541.64 00:25:16.577 =================================================================================================================== 00:25:16.577 Total : 15834.51 61.85 0.00 0.00 8076.51 2502.28 19541.64 00:25:16.577 { 00:25:16.577 "results": [ 00:25:16.577 { 00:25:16.577 "job": "nvme0n1", 00:25:16.577 "core_mask": "0x2", 00:25:16.577 "workload": "randwrite", 00:25:16.577 "status": "finished", 00:25:16.577 "queue_depth": 128, 00:25:16.577 "io_size": 4096, 00:25:16.577 "runtime": 2.00524, 00:25:16.577 "iops": 15834.513574434981, 00:25:16.577 "mibps": 61.853568650136644, 00:25:16.577 "io_failed": 0, 00:25:16.577 "io_timeout": 0, 00:25:16.577 "avg_latency_us": 8076.513092833093, 00:25:16.577 "min_latency_us": 2502.2836363636366, 00:25:16.577 "max_latency_us": 19541.643636363635 00:25:16.577 } 00:25:16.577 ], 00:25:16.577 "core_count": 1 00:25:16.577 } 00:25:16.836 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:16.836 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:16.836 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:16.836 | select(.opcode=="crc32c") 00:25:16.836 | "\(.module_name) \(.executed)"' 00:25:16.836 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:16.836 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86186 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 86186 ']' 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 86186 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86186 00:25:17.095 killing process with pid 86186 00:25:17.095 Received shutdown signal, test time was about 2.000000 seconds 00:25:17.095 00:25:17.095 Latency(us) 00:25:17.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.095 =================================================================================================================== 
00:25:17.095 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86186' 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 86186 00:25:17.095 09:02:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 86186 00:25:18.033 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86253 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86253 /var/tmp/bperf.sock 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 86253 ']' 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:18.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:18.034 09:02:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:18.034 [2024-09-28 09:02:55.883009] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:25:18.034 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:18.034 Zero copy mechanism will not be used. 
00:25:18.034 [2024-09-28 09:02:55.883185] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86253 ] 00:25:18.293 [2024-09-28 09:02:56.053022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.293 [2024-09-28 09:02:56.211924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.861 09:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:18.861 09:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:25:18.861 09:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:18.861 09:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:18.861 09:02:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:19.429 [2024-09-28 09:02:57.147303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:19.429 09:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.429 09:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.689 nvme0n1 00:25:19.689 09:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:19.689 09:02:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:19.689 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:19.689 Zero copy mechanism will not be used. 00:25:19.689 Running I/O for 2 seconds... 
00:25:22.005 5880.00 IOPS, 735.00 MiB/s 5890.50 IOPS, 736.31 MiB/s 00:25:22.005 Latency(us) 00:25:22.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.005 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:22.005 nvme0n1 : 2.00 5889.72 736.21 0.00 0.00 2710.06 1869.27 7804.74 00:25:22.005 =================================================================================================================== 00:25:22.005 Total : 5889.72 736.21 0.00 0.00 2710.06 1869.27 7804.74 00:25:22.005 { 00:25:22.005 "results": [ 00:25:22.005 { 00:25:22.005 "job": "nvme0n1", 00:25:22.005 "core_mask": "0x2", 00:25:22.005 "workload": "randwrite", 00:25:22.005 "status": "finished", 00:25:22.005 "queue_depth": 16, 00:25:22.005 "io_size": 131072, 00:25:22.005 "runtime": 2.00434, 00:25:22.005 "iops": 5889.719309099255, 00:25:22.005 "mibps": 736.2149136374069, 00:25:22.005 "io_failed": 0, 00:25:22.005 "io_timeout": 0, 00:25:22.005 "avg_latency_us": 2710.06317631204, 00:25:22.005 "min_latency_us": 1869.2654545454545, 00:25:22.005 "max_latency_us": 7804.741818181818 00:25:22.005 } 00:25:22.005 ], 00:25:22.005 "core_count": 1 00:25:22.005 } 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:22.006 | select(.opcode=="crc32c") 00:25:22.006 | "\(.module_name) \(.executed)"' 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86253 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 86253 ']' 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 86253 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86253 00:25:22.006 killing process with pid 86253 00:25:22.006 Received shutdown signal, test time was about 2.000000 seconds 00:25:22.006 00:25:22.006 Latency(us) 00:25:22.006 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.006 =================================================================================================================== 00:25:22.006 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86253' 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 86253 00:25:22.006 09:02:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 86253 00:25:23.385 09:03:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 86013 00:25:23.385 09:03:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 86013 ']' 00:25:23.385 09:03:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 86013 00:25:23.385 09:03:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:25:23.385 09:03:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:23.385 09:03:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86013 00:25:23.385 killing process with pid 86013 00:25:23.385 09:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:23.385 09:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:23.385 09:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86013' 00:25:23.385 09:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 86013 00:25:23.385 09:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 86013 00:25:24.323 ************************************ 00:25:24.323 END TEST nvmf_digest_clean 00:25:24.323 ************************************ 00:25:24.323 00:25:24.323 real 0m23.071s 00:25:24.323 user 0m44.188s 00:25:24.323 sys 0m4.706s 00:25:24.323 09:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:24.323 09:03:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:24.323 09:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:24.323 09:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:24.323 09:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:24.323 09:03:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:24.323 ************************************ 00:25:24.323 START TEST nvmf_digest_error 00:25:24.323 ************************************ 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # 
timing_enter start_nvmf_tgt 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=86355 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 86355 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86355 ']' 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:24.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:24.323 09:03:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:24.323 [2024-09-28 09:03:02.140404] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:25:24.323 [2024-09-28 09:03:02.140577] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.323 [2024-09-28 09:03:02.316205] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.583 [2024-09-28 09:03:02.482357] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.583 [2024-09-28 09:03:02.482424] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.583 [2024-09-28 09:03:02.482473] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.583 [2024-09-28 09:03:02.482489] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.583 [2024-09-28 09:03:02.482501] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
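(Editorial note: the nvmf_digest_error bring-up that follows differs from the clean variant in one respect — the target assigns the crc32c opcode to the error accel module so that digests can later be corrupted on demand from the bperf side. A condensed sketch of that configuration, using the RPCs visible further down in this log; the log actually issues the target-side call through rpc_cmd inside the nvmf_tgt_ns_spdk namespace against /var/tmp/spdk.sock:)

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # on the target: route crc32c through the error-injection accel module
    $rpc accel_assign_opc -o crc32c -m error
    # on the bperf side: leave digests intact while the controller attaches ...
    $rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
    # ... then corrupt 256 crc32c operations to provoke data digest errors on reads
    $rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256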
00:25:24.583 [2024-09-28 09:03:02.482537] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:25.151 [2024-09-28 09:03:03.071429] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.151 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:25.411 [2024-09-28 09:03:03.229005] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:25.411 null0 00:25:25.411 [2024-09-28 09:03:03.327541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.411 [2024-09-28 09:03:03.351725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86387 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86387 /var/tmp/bperf.sock 00:25:25.411 09:03:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86387 ']' 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:25.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:25.411 09:03:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:25.671 [2024-09-28 09:03:03.470684] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:25:25.671 [2024-09-28 09:03:03.470876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86387 ] 00:25:25.671 [2024-09-28 09:03:03.641283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.930 [2024-09-28 09:03:03.841415] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.189 [2024-09-28 09:03:03.991551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:26.447 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.447 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:25:26.447 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:26.447 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:26.706 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:26.706 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.706 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:26.706 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.706 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:26.706 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:26.964 nvme0n1 00:25:26.964 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:26.964 09:03:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.964 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:26.965 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.965 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:26.965 09:03:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:27.224 Running I/O for 2 seconds... 00:25:27.224 [2024-09-28 09:03:05.065527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.224 [2024-09-28 09:03:05.065620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.224 [2024-09-28 09:03:05.065661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.224 [2024-09-28 09:03:05.083649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.224 [2024-09-28 09:03:05.083695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.224 [2024-09-28 09:03:05.083715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.224 [2024-09-28 09:03:05.101770] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.224 [2024-09-28 09:03:05.101832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.224 [2024-09-28 09:03:05.101852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.224 [2024-09-28 09:03:05.120284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.224 [2024-09-28 09:03:05.120346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.224 [2024-09-28 09:03:05.120367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.224 [2024-09-28 09:03:05.137602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.224 [2024-09-28 09:03:05.137666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.224 [2024-09-28 09:03:05.137684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.224 [2024-09-28 09:03:05.155051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.224 [2024-09-28 09:03:05.155116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:4781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.224 [2024-09-28 09:03:05.155134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.224 [2024-09-28 09:03:05.171894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.224 [2024-09-28 09:03:05.171952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.224 [2024-09-28 09:03:05.171972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.224 [2024-09-28 09:03:05.188937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.224 [2024-09-28 09:03:05.189007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.224 [2024-09-28 09:03:05.189025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.224 [2024-09-28 09:03:05.206042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.224 [2024-09-28 09:03:05.206104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.224 [2024-09-28 09:03:05.206123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.224151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.224211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.224233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.241658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.241721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.241739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.258895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.258958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.258976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.276135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 
09:03:05.276192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.276215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.293379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.293442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.293460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.310494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.310557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.310574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.327446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.327503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.327524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.344551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.344615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.344633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.361709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.361772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.361790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.379003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.379060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.379081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.395971] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.396035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.396052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.413091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.413157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.413190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.430088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.430145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.430165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.447188] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.447247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.447267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.484 [2024-09-28 09:03:05.464103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.484 [2024-09-28 09:03:05.464166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.484 [2024-09-28 09:03:05.464184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.482613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.482674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.482695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.499928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.499990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.500008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.516969] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.517036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.517056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.533877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.533933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:7503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.533953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.550935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.550993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.551013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.567790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.567862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.567880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.584756] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.584845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.584869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.601688] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.601744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.601764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.618968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.619031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.619049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.635935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.635990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.636010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.652921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.652981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.653002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.670323] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.670384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.670402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.687388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.687445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.687467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.704370] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.704425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.704446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.744 [2024-09-28 09:03:05.721399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:27.744 [2024-09-28 09:03:05.721460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.744 [2024-09-28 09:03:05.721477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.739033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.739093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24244 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.739119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.757114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.757235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.757271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.775112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.775176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.775195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.792740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.792847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.792871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.809714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.809772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.809794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.826722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.826785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.826802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.844814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.844897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.844920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.865352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.865416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.865434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.884326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.884382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.884402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.902284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.902340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.902360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.919539] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.919600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.919618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.936543] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.936599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.936618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.953709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.953766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.953786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.970763] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.970835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.970855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.004 [2024-09-28 09:03:05.987723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x61500002b280) 00:25:28.004 [2024-09-28 09:03:05.987779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.004 [2024-09-28 09:03:05.987801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.264 [2024-09-28 09:03:06.006038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.006095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.006116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.264 [2024-09-28 09:03:06.023109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.023172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.023190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.264 14422.00 IOPS, 56.34 MiB/s [2024-09-28 09:03:06.040282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.040337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.040358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.264 [2024-09-28 09:03:06.057376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.057431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.057452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.264 [2024-09-28 09:03:06.074383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.074445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.074463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.264 [2024-09-28 09:03:06.091762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.091828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.091849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:28.264 [2024-09-28 09:03:06.108587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.108643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.108665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.264 [2024-09-28 09:03:06.127189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.127254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.127272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.264 [2024-09-28 09:03:06.147293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.147336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.147356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.264 [2024-09-28 09:03:06.173954] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.173997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.174020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.264 [2024-09-28 09:03:06.192051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.192095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.192115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.264 [2024-09-28 09:03:06.210178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.210226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.210244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.264 [2024-09-28 09:03:06.228386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.228428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.228448] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.264 [2024-09-28 09:03:06.246841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.264 [2024-09-28 09:03:06.246888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.264 [2024-09-28 09:03:06.246905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.266475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.266525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.266544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.285202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.285261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:24099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.285282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.303556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.303608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.303626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.321645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.321688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.321708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.340457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.340505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.340535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.358869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.358917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24871 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.358934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.377152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.377243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.377263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.394601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.394665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.394682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.412288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.412352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.412370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.429528] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.429586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.429606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.446752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.446815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.446846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.463956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.464021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.464039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.481233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.481291] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.481313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.498402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.498464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.498482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.524 [2024-09-28 09:03:06.515957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.524 [2024-09-28 09:03:06.516021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.524 [2024-09-28 09:03:06.516054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.783 [2024-09-28 09:03:06.534107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.783 [2024-09-28 09:03:06.534167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.783 [2024-09-28 09:03:06.534188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.783 [2024-09-28 09:03:06.551444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.783 [2024-09-28 09:03:06.551508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.783 [2024-09-28 09:03:06.551526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.783 [2024-09-28 09:03:06.568615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.783 [2024-09-28 09:03:06.568680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.783 [2024-09-28 09:03:06.568697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.783 [2024-09-28 09:03:06.586513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.783 [2024-09-28 09:03:06.586572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.783 [2024-09-28 09:03:06.586592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.784 [2024-09-28 09:03:06.603967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x61500002b280) 00:25:28.784 [2024-09-28 09:03:06.604037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.784 [2024-09-28 09:03:06.604054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.784 [2024-09-28 09:03:06.621459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.784 [2024-09-28 09:03:06.621524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.784 [2024-09-28 09:03:06.621542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.784 [2024-09-28 09:03:06.638863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.784 [2024-09-28 09:03:06.638921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.784 [2024-09-28 09:03:06.638941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.784 [2024-09-28 09:03:06.656235] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.784 [2024-09-28 09:03:06.656298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.784 [2024-09-28 09:03:06.656315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.784 [2024-09-28 09:03:06.673376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.784 [2024-09-28 09:03:06.673434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.784 [2024-09-28 09:03:06.673450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.784 [2024-09-28 09:03:06.690747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.784 [2024-09-28 09:03:06.690806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.784 [2024-09-28 09:03:06.690833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.784 [2024-09-28 09:03:06.708174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.784 [2024-09-28 09:03:06.708248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.784 [2024-09-28 09:03:06.708264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.784 [2024-09-28 
09:03:06.725465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.784 [2024-09-28 09:03:06.725522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.784 [2024-09-28 09:03:06.725539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.784 [2024-09-28 09:03:06.742558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.784 [2024-09-28 09:03:06.742616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.784 [2024-09-28 09:03:06.742632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.784 [2024-09-28 09:03:06.759755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.784 [2024-09-28 09:03:06.759830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.784 [2024-09-28 09:03:06.759859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.784 [2024-09-28 09:03:06.777830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:28.784 [2024-09-28 09:03:06.777944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.784 [2024-09-28 09:03:06.777967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.043 [2024-09-28 09:03:06.799407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.043 [2024-09-28 09:03:06.799466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.043 [2024-09-28 09:03:06.799484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.043 [2024-09-28 09:03:06.816742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.043 [2024-09-28 09:03:06.816852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.043 [2024-09-28 09:03:06.816873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.043 [2024-09-28 09:03:06.834021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.043 [2024-09-28 09:03:06.834078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.043 [2024-09-28 09:03:06.834095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.043 [2024-09-28 09:03:06.851207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.043 [2024-09-28 09:03:06.851264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.043 [2024-09-28 09:03:06.851280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.043 [2024-09-28 09:03:06.869618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.043 [2024-09-28 09:03:06.869675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.043 [2024-09-28 09:03:06.869692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.043 [2024-09-28 09:03:06.889993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.043 [2024-09-28 09:03:06.890054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.043 [2024-09-28 09:03:06.890073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.043 [2024-09-28 09:03:06.908431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.043 [2024-09-28 09:03:06.908488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.043 [2024-09-28 09:03:06.908504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.043 [2024-09-28 09:03:06.926011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.043 [2024-09-28 09:03:06.926068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.043 [2024-09-28 09:03:06.926085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.043 [2024-09-28 09:03:06.943200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.043 [2024-09-28 09:03:06.943255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.043 [2024-09-28 09:03:06.943271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.043 [2024-09-28 09:03:06.960346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.043 [2024-09-28 09:03:06.960402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.043 [2024-09-28 09:03:06.960419] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.043 [2024-09-28 09:03:06.977589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.043 [2024-09-28 09:03:06.977646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.043 [2024-09-28 09:03:06.977662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.043 [2024-09-28 09:03:06.994904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.043 [2024-09-28 09:03:06.994960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.043 [2024-09-28 09:03:06.994976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.044 [2024-09-28 09:03:07.011962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.044 [2024-09-28 09:03:07.012018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.044 [2024-09-28 09:03:07.012034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.044 [2024-09-28 09:03:07.029204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:29.044 [2024-09-28 09:03:07.029260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:29.044 [2024-09-28 09:03:07.029276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:29.303 14295.00 IOPS, 55.84 MiB/s 00:25:29.303 Latency(us) 00:25:29.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.303 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:29.303 nvme0n1 : 2.01 14312.91 55.91 0.00 0.00 8935.98 8221.79 35270.28 00:25:29.303 =================================================================================================================== 00:25:29.303 Total : 14312.91 55.91 0.00 0.00 8935.98 8221.79 35270.28 00:25:29.303 { 00:25:29.303 "results": [ 00:25:29.303 { 00:25:29.303 "job": "nvme0n1", 00:25:29.303 "core_mask": "0x2", 00:25:29.303 "workload": "randread", 00:25:29.303 "status": "finished", 00:25:29.303 "queue_depth": 128, 00:25:29.303 "io_size": 4096, 00:25:29.303 "runtime": 2.006441, 00:25:29.303 "iops": 14312.905288518326, 00:25:29.303 "mibps": 55.90978628327471, 00:25:29.303 "io_failed": 0, 00:25:29.303 "io_timeout": 0, 00:25:29.303 "avg_latency_us": 8935.982493589703, 00:25:29.303 "min_latency_us": 8221.789090909091, 00:25:29.303 "max_latency_us": 35270.28363636364 00:25:29.303 } 00:25:29.303 ], 00:25:29.303 "core_count": 1 00:25:29.303 } 00:25:29.303 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:29.303 09:03:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:29.303 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:29.303 | .driver_specific 00:25:29.303 | .nvme_error 00:25:29.303 | .status_code 00:25:29.303 | .command_transient_transport_error' 00:25:29.303 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:29.562 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 112 > 0 )) 00:25:29.562 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86387 00:25:29.562 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86387 ']' 00:25:29.562 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86387 00:25:29.562 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:25:29.562 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:29.562 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86387 00:25:29.562 killing process with pid 86387 00:25:29.562 Received shutdown signal, test time was about 2.000000 seconds 00:25:29.562 00:25:29.562 Latency(us) 00:25:29.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.562 =================================================================================================================== 00:25:29.562 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:29.562 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:29.562 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:29.562 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86387' 00:25:29.562 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86387 00:25:29.562 09:03:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86387 00:25:30.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
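The pass/fail decision a few lines above (host/digest.sh@71, "(( 112 > 0 ))") comes from the transient-error counter kept per bdev. A minimal sketch of that check with the xtrace prefixes stripped (socket path, bdev name and jq filter are copied from the trace; 112 is simply the value this run produced, and the nvme_error block is exposed because the script configures bdev_nvme_set_options --nvme-error-stat, as seen again below for the next bperf instance):

  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))   # this run counted 112 COMMAND TRANSIENT TRANSPORT ERROR completions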
00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86454 00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86454 /var/tmp/bperf.sock 00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86454 ']' 00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:30.500 09:03:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:30.500 [2024-09-28 09:03:08.350355] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:25:30.500 [2024-09-28 09:03:08.350775] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86454 ] 00:25:30.500 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:30.500 Zero copy mechanism will not be used. 
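For this second pass (run_bperf_err randread 131072 16) the harness starts a fresh bdevperf instance and waits for its RPC socket before configuring it. A rough sketch of the launch, with the flags copied from the host/digest.sh@57 line above (the -z flag keeps bdevperf idle until perform_tests is issued over RPC later in the trace):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!                     # 86454 in this run
  # waitforlisten then polls until /var/tmp/bperf.sock accepts RPC connections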
00:25:30.760 [2024-09-28 09:03:08.506629] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.760 [2024-09-28 09:03:08.656271] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.018 [2024-09-28 09:03:08.812398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:31.586 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:31.586 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:25:31.586 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:31.586 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:31.586 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:31.586 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.586 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:31.586 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.586 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:31.586 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:31.844 nvme0n1 00:25:31.844 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:31.844 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.845 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:31.845 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.845 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:31.845 09:03:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:32.104 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:32.104 Zero copy mechanism will not be used. 00:25:32.104 Running I/O for 2 seconds... 
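Stripped of the xtrace prefixes, the setup that precedes "Running I/O for 2 seconds..." boils down to the sequence below. This is a condensed sketch, not additional steps: every command and flag is copied from the trace; bperf_rpc wraps rpc.py against /var/tmp/bperf.sock, while rpc_cmd targets the default RPC socket of the test application, which the trace does not show explicitly.

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd   accel_error_inject_error -o crc32c -t disable        # start with crc32c injection off
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0       # --ddgst enables TCP data digests
  rpc_cmd   accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt crc32c results (-i 32 as logged)
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The corrupted crc32c results are what make READ completions surface as data digest errors and, in turn, as the COMMAND TRANSIENT TRANSPORT ERROR completions counted at the end of the run.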
00:25:32.104 [2024-09-28 09:03:09.932501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.104 [2024-09-28 09:03:09.932640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.104 [2024-09-28 09:03:09.932704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.104 [2024-09-28 09:03:09.938381] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.104 [2024-09-28 09:03:09.938435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.104 [2024-09-28 09:03:09.938462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.104 [2024-09-28 09:03:09.944451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.104 [2024-09-28 09:03:09.944759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.104 [2024-09-28 09:03:09.944939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.104 [2024-09-28 09:03:09.950781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.104 [2024-09-28 09:03:09.951119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.104 [2024-09-28 09:03:09.951295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.104 [2024-09-28 09:03:09.956662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.104 [2024-09-28 09:03:09.957020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.104 [2024-09-28 09:03:09.957395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.104 [2024-09-28 09:03:09.962710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.104 [2024-09-28 09:03:09.963079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.104 [2024-09-28 09:03:09.963247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.104 [2024-09-28 09:03:09.968574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.104 [2024-09-28 09:03:09.968936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:09.969300] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:09.975048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:09.975360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:09.975723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:09.980858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:09.981194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:09.981485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:09.986632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:09.986958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:09.987222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:09.992354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:09.992642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:09.993047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:09.998360] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:09.998611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:09.998717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.004108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.004273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.004394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.009696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.009998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:32.105 [2024-09-28 09:03:10.010162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.015343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.015601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.015707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.021203] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.021565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.021746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.027533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.027801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.027940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.033237] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.033516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.033645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.039086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.039417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.039750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.045330] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.045631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.046011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.051282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.051583] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.051934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.057074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.057402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.057504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.062541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.062813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.062937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.068075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.068338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.068470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.073485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.073747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.073898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.079036] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.079334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.079601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.084611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.084986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.085312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.090718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.091062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.091367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.105 [2024-09-28 09:03:10.096900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.105 [2024-09-28 09:03:10.097241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.105 [2024-09-28 09:03:10.097559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.365 [2024-09-28 09:03:10.103420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.365 [2024-09-28 09:03:10.103763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.365 [2024-09-28 09:03:10.104062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.365 [2024-09-28 09:03:10.109295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.365 [2024-09-28 09:03:10.109400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.365 [2024-09-28 09:03:10.109499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.365 [2024-09-28 09:03:10.114505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.365 [2024-09-28 09:03:10.114751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.365 [2024-09-28 09:03:10.114919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.365 [2024-09-28 09:03:10.120090] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.365 [2024-09-28 09:03:10.120383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.365 [2024-09-28 09:03:10.120658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.365 [2024-09-28 09:03:10.125828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.365 [2024-09-28 09:03:10.126086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.365 [2024-09-28 09:03:10.126345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.365 [2024-09-28 09:03:10.131512] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.365 [2024-09-28 09:03:10.131813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.365 [2024-09-28 09:03:10.132199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.365 [2024-09-28 09:03:10.137347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.137666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.138050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.143186] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.143485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.143766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.148993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.149281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.149618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.154707] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.155066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.155344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.160517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.160858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.161139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.166386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.166674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.166938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.172024] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.172310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.172565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.178068] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.178352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.178478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.183520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.183859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.183976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.188926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.189237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.189351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.194391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.194656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.194767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.199769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.200098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.200386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.205394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.205675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.206038] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.210922] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.211037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.211127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.215984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.216108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.216135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.221071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.221157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.221191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.225841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.225915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.225937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.230451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.230522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.230542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.235168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.235236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.235256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.239894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.239956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.239979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.244526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.244588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.244612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.249510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.249579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.249599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.254581] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.254794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.254834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.259705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.259767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.259790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.264372] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.264434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.264456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.269108] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 09:03:10.269199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.269235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.273926] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.366 [2024-09-28 
09:03:10.273994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.366 [2024-09-28 09:03:10.274029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.366 [2024-09-28 09:03:10.278786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.278858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.278881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.283497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.283558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.283580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.288189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.288257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.288276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.292986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.293061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.293084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.297785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.297876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.297901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.302595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.302657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.302680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.307346] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.307419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.307439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.312095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.312166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.312185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.316758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.317017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.317049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.321940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.322001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.322023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.326622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.326693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.326722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.331613] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.331684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.331704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.336476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.336666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.336696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.341676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.341739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.341761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.346499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.346571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.346590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.351374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.351571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.351601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.367 [2024-09-28 09:03:10.356904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.367 [2024-09-28 09:03:10.356980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.367 [2024-09-28 09:03:10.357010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.627 [2024-09-28 09:03:10.362393] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.627 [2024-09-28 09:03:10.362599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.627 [2024-09-28 09:03:10.362630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.367813] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.367884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.367904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.372485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.372553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.372572] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.377353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.377554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.377584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.382498] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.382560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.382583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.387279] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.387350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.387370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.392057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.392118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.392140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.396885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.396950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.396974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.401561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.401629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.401648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.406386] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.406453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.406473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.411195] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.411393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.411423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.416145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.416207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.416229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.421374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.421443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.421462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.426144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.426214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.426234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.430873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.430934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.430956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.435570] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.435632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.435656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.440378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.440448] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.440468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.445359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.445570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.445595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.450470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.450532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.450554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.455232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.455293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.455315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.460047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.460116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.460135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.464599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.464667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.464687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.469501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.469563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.469589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.474254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.474316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.474338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.478985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.479052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.479071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.483686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.483753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.483772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.488508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.488570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.628 [2024-09-28 09:03:10.488592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.628 [2024-09-28 09:03:10.493569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.628 [2024-09-28 09:03:10.493631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.493653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.498493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.498699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.498724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.503564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.503634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.503653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.629 
[2024-09-28 09:03:10.508368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.508430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.508452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.513253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.513450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.513482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.518325] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.518547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.518706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.523663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.523887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.524022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.528897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.529105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.529254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.534198] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.534403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.534540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.539419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.539660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.539794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.544844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.545051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.545202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.550250] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.550465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.550603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.555649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.555890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.556082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.560986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.561209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.561338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.566259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.566483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.566616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.571706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.571941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.572122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.576937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.577161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 
[2024-09-28 09:03:10.577299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.582223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.582414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.582677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.587571] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.587789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.587937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.592768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.593012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.593038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.597929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.598142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.598283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.603075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.603280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.603413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.608356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.608573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.608715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.613651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.613879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.614122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.629 [2024-09-28 09:03:10.619497] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.629 [2024-09-28 09:03:10.619719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.629 [2024-09-28 09:03:10.619992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.892 [2024-09-28 09:03:10.625484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.892 [2024-09-28 09:03:10.625693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.892 [2024-09-28 09:03:10.625861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.892 [2024-09-28 09:03:10.631224] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.892 [2024-09-28 09:03:10.631423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.892 [2024-09-28 09:03:10.631567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.892 [2024-09-28 09:03:10.636345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.892 [2024-09-28 09:03:10.636564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.892 [2024-09-28 09:03:10.636704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.892 [2024-09-28 09:03:10.641778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.892 [2024-09-28 09:03:10.642022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.892 [2024-09-28 09:03:10.642137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.892 [2024-09-28 09:03:10.646992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.892 [2024-09-28 09:03:10.647206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.892 [2024-09-28 09:03:10.647337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.892 [2024-09-28 09:03:10.652193] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.892 
[2024-09-28 09:03:10.652410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.892 [2024-09-28 09:03:10.652541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.892 [2024-09-28 09:03:10.657715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.892 [2024-09-28 09:03:10.657963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.892 [2024-09-28 09:03:10.658173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.892 [2024-09-28 09:03:10.662971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.892 [2024-09-28 09:03:10.663186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.892 [2024-09-28 09:03:10.663328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.892 [2024-09-28 09:03:10.668215] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.892 [2024-09-28 09:03:10.668421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.892 [2024-09-28 09:03:10.668553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.892 [2024-09-28 09:03:10.673567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.892 [2024-09-28 09:03:10.673782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.892 [2024-09-28 09:03:10.674020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.892 [2024-09-28 09:03:10.678827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.892 [2024-09-28 09:03:10.679026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.892 [2024-09-28 09:03:10.679167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.892 [2024-09-28 09:03:10.683863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.892 [2024-09-28 09:03:10.684079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.892 [2024-09-28 09:03:10.684221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.892 [2024-09-28 09:03:10.689116] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.892 [2024-09-28 09:03:10.689337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.892 [2024-09-28 09:03:10.689468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.694489] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.694697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.694861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.699787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.700006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.700221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.705206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.705413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.705557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.710526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.710736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.710910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.715913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.716115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.716342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.721398] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.721595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.721754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.726725] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.726970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.727102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.731902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.732114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.732294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.737212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.737397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.737422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.742349] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.742554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.742700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.747715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.747953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.748116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.752977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.753202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.753333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.758298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.758505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.758633] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.763603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.763821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.763955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.768947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.769133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.769277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.774183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.774400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.774542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.779406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.779602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.779745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.784729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.784959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.785211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.790126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.790316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.790450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.795454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.795663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.795795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.800721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.800998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.801139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.806039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.893 [2024-09-28 09:03:10.806240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.893 [2024-09-28 09:03:10.806386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.893 [2024-09-28 09:03:10.811223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.811428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.811607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.816470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.816663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.816831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.821888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.822080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.822283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.827133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.827339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.827482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.832287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 
09:03:10.832495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.832624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.837651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.837716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.837734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.842548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.842611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.842629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.847423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.847485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.847504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.852209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.852413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.852436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.857371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.857588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.857729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.862692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.862930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.863066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.868069] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.868264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.868441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.873408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.873614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.873796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.878760] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.878980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.894 [2024-09-28 09:03:10.879175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.894 [2024-09-28 09:03:10.884974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:32.894 [2024-09-28 09:03:10.885183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.885406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.891879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.892141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.892290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.898534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.898797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.898845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.905272] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.905490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.905639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.911610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.911838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.912011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.917342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.917557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.917695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.923046] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.923267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.923401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.212 5797.00 IOPS, 724.62 MiB/s [2024-09-28 09:03:10.929704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.929963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.930185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.935345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.935540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.935683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.940466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.940684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.940876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.946151] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.946367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 
[2024-09-28 09:03:10.946508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.951590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.951782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.951966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.957537] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.957759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.957925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.963622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.963877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.964018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.969946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.970177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.970351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.975931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.976155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.976436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.981853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.982075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.982102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.987488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.987686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.987710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.992736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.992852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.992876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:10.997764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:10.997851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:10.997872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:11.002635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:11.002697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:11.002731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:11.007535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:11.007739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:11.007763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:11.012641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:11.012704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:11.012722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:11.017569] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 [2024-09-28 09:03:11.017631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.212 [2024-09-28 09:03:11.017650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.212 [2024-09-28 09:03:11.022456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.212 
[2024-09-28 09:03:11.022518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.022538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.027258] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.027462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.027486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.032199] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.032261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.032280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.036912] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.036977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.036996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.041586] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.041649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.041668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.046429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.046632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.046657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.051566] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.051787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.052011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.056750] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.057040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.057272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.062134] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.062328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.062468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.067403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.067620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.067750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.072653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.072906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.073038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.077789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.078041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.078179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.083078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.083282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.083412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.088213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.088409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.088434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.093407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.093624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.093766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.098765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.099003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.099169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.104183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.104380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.104539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.109546] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.109762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.109789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.114469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.114531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.114549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.119280] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.119342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.119360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.124114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.124163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.124181] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.128764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.128896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.128917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.133740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.133986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.134012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.138864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.139060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.139202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.144069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.144274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.144419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.149358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.149566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.149698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.154697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.154949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.213 [2024-09-28 09:03:11.155172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.213 [2024-09-28 09:03:11.160143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.213 [2024-09-28 09:03:11.160337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.214 [2024-09-28 09:03:11.160480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.214 [2024-09-28 09:03:11.165467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.214 [2024-09-28 09:03:11.165674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.214 [2024-09-28 09:03:11.165817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.214 [2024-09-28 09:03:11.170799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.214 [2024-09-28 09:03:11.171018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.214 [2024-09-28 09:03:11.171199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.214 [2024-09-28 09:03:11.176055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.214 [2024-09-28 09:03:11.176259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.214 [2024-09-28 09:03:11.176476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.214 [2024-09-28 09:03:11.181661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.214 [2024-09-28 09:03:11.181901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.214 [2024-09-28 09:03:11.182039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.214 [2024-09-28 09:03:11.187341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.214 [2024-09-28 09:03:11.187406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.214 [2024-09-28 09:03:11.187425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.214 [2024-09-28 09:03:11.192428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.214 [2024-09-28 09:03:11.192491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.214 [2024-09-28 09:03:11.192510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.214 [2024-09-28 09:03:11.197708] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.214 [2024-09-28 
09:03:11.197929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.214 [2024-09-28 09:03:11.197955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.214 [2024-09-28 09:03:11.203799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.214 [2024-09-28 09:03:11.203918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.214 [2024-09-28 09:03:11.203943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.474 [2024-09-28 09:03:11.209766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.474 [2024-09-28 09:03:11.210015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.474 [2024-09-28 09:03:11.210041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.474 [2024-09-28 09:03:11.215794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.474 [2024-09-28 09:03:11.216090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.474 [2024-09-28 09:03:11.216259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.474 [2024-09-28 09:03:11.221523] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.474 [2024-09-28 09:03:11.221745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.474 [2024-09-28 09:03:11.221909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.474 [2024-09-28 09:03:11.227207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.227436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.227619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.232847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.233053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.233296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.238466] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.238684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.238920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.243993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.244208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.244389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.249430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.249637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.249772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.254918] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.255138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.255272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.260541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.260741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.260765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.265681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.265913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.265940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.270765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.270874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.270896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.275829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.275908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.275929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.281233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.281436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.281462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.286408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.286473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.286493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.291448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.291513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.291532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.296527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.296730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.296754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.302012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.302076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.302096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.306885] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.306948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.306967] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.311768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.311876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.311898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.316949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.317016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.317037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.322037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.322103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.322122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.327206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.327269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.327288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.332091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.332154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.332173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.336890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.336955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.336975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.341684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.341748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.341767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.346967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.347031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.347050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.351761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.351856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.351877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.356690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.356922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.356948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.475 [2024-09-28 09:03:11.361998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.475 [2024-09-28 09:03:11.362217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.475 [2024-09-28 09:03:11.362352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.367622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.367842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.367991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.373194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.373404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.373548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.378658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 
09:03:11.378891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.379045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.384079] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.384291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.384420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.389671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.389934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.390143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.395310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.395529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.395718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.400768] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.401041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.401246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.406935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.407135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.407327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.413444] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.413511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.413531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.418605] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.418851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.418879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.423793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.423868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.423887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.428507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.428571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.428590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.433974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.434037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.434056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.438781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.438886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.438907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.443734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.443795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.443813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.448385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.448447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.448465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.453240] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.453437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.453462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.458354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.458570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.458700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.476 [2024-09-28 09:03:11.463617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.476 [2024-09-28 09:03:11.463858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.476 [2024-09-28 09:03:11.464022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.469868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.470096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.470298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.475601] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.475833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.476030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.480896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.481102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.481277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.486266] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.486486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.486618] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.491679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.491919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.492068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.497255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.497472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.497615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.502505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.502568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.502587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.507202] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.507265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.507283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.511981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.512029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.512048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.516572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.516635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.516653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.521477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.521539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.521556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.526260] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.526322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.526341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.530917] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.530977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.530996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.535538] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.535599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.535617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.540295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.540356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.540374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.545039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.545104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.545138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.549736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.549798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.549832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.554453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 
09:03:11.554652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.554676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.559506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.559568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.559587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.564346] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.735 [2024-09-28 09:03:11.564409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.735 [2024-09-28 09:03:11.564427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.735 [2024-09-28 09:03:11.569141] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.569220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.569255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.573882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.573940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.573959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.578511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.578572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.578591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.583310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.583371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.583389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.588059] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.588121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.588139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.592654] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.592715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.592734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.597634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.597696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.597714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.602458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.602529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.602549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.607282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.607479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.607502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.612268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.612331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.612351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.617204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.617266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.617284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.622023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.622085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.622103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.626676] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.626738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.626757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.631479] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.631541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.631559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.636271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.636333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.636352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.641083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.641162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.641210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.645902] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.645947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.645981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.650580] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.650642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.650660] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.655456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.655519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.655537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.660189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.660387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.660426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.665204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.665281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.665300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.669957] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.670018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.670037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.674631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.674692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.674711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.679464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.679668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.679692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.684408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.684470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.684489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.689039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.689088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.689107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.693773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.693864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.693884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.698641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.698704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.698722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.703476] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.703676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.703700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.708441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.708504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.708523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.713228] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.713289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.713307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.717940] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.718000] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.718019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.722715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.722777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.722797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.736 [2024-09-28 09:03:11.727974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.736 [2024-09-28 09:03:11.728040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.736 [2024-09-28 09:03:11.728075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.733302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.733383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.733402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.738290] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.738353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.738373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.743109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.743170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.743204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.747859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.747921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.747940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.752557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.752619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.752638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.757470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.757532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.757566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.762213] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.762418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.762442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.767508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.767723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.767870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.772696] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.772934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.773072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.777895] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.778110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.778241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.783183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.783404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.783630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.997 
[2024-09-28 09:03:11.788326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.788541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.788672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.793742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.793989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.794124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.799042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.799238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.799366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.804283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.804474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.804661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.809686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.809922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.810057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.814942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.815147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.815277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.820094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.820306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.820331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.825254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.825316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.825334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.829998] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.830058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.830076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.834721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.834783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.834801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.839405] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.839467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.839487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.844178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.844239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.844257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.997 [2024-09-28 09:03:11.848946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.997 [2024-09-28 09:03:11.849010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.997 [2024-09-28 09:03:11.849029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.853787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.853878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 
[2024-09-28 09:03:11.853899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.858802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.858875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.858895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.863493] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.863555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.863573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.868249] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.868452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.868476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.873353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.873429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.873448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.878113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.878176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.878195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.882822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.882897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.882916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.887576] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.887639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.887657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.892648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.892712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.892731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.897728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.897792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.897810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.902771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.902856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.902876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.907967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.908017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.908036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.912781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.912867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.912887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.917759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 [2024-09-28 09:03:11.917851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.917871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.998 [2024-09-28 09:03:11.922719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:33.998 
[2024-09-28 09:03:11.922781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.998 [2024-09-28 09:03:11.922799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.998 5921.00 IOPS, 740.12 MiB/s 00:25:33.998 Latency(us) 00:25:33.998 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.998 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:33.998 nvme0n1 : 2.00 5921.19 740.15 0.00 0.00 2698.43 2070.34 6851.49 00:25:33.998 =================================================================================================================== 00:25:33.998 Total : 5921.19 740.15 0.00 0.00 2698.43 2070.34 6851.49 00:25:33.998 { 00:25:33.998 "results": [ 00:25:33.998 { 00:25:33.998 "job": "nvme0n1", 00:25:33.998 "core_mask": "0x2", 00:25:33.998 "workload": "randread", 00:25:33.998 "status": "finished", 00:25:33.998 "queue_depth": 16, 00:25:33.998 "io_size": 131072, 00:25:33.998 "runtime": 2.002637, 00:25:33.998 "iops": 5921.19290715192, 00:25:33.998 "mibps": 740.14911339399, 00:25:33.998 "io_failed": 0, 00:25:33.998 "io_timeout": 0, 00:25:33.998 "avg_latency_us": 2698.4255066775017, 00:25:33.998 "min_latency_us": 2070.3418181818183, 00:25:33.998 "max_latency_us": 6851.490909090909 00:25:33.998 } 00:25:33.998 ], 00:25:33.998 "core_count": 1 00:25:33.998 } 00:25:33.998 09:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:33.998 09:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:33.998 09:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:33.998 09:03:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:33.998 | .driver_specific 00:25:33.998 | .nvme_error 00:25:33.998 | .status_code 00:25:33.998 | .command_transient_transport_error' 00:25:34.256 09:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 382 > 0 )) 00:25:34.256 09:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86454 00:25:34.256 09:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86454 ']' 00:25:34.256 09:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86454 00:25:34.256 09:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:25:34.256 09:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:34.256 09:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86454 00:25:34.256 killing process with pid 86454 00:25:34.256 Received shutdown signal, test time was about 2.000000 seconds 00:25:34.256 00:25:34.256 Latency(us) 00:25:34.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.256 =================================================================================================================== 00:25:34.256 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:34.256 09:03:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:34.257 09:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:34.257 09:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86454' 00:25:34.257 09:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86454 00:25:34.257 09:03:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86454 00:25:35.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86523 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86523 /var/tmp/bperf.sock 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86523 ']' 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:35.630 09:03:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:35.630 [2024-09-28 09:03:13.301684] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:25:35.630 [2024-09-28 09:03:13.302186] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86523 ] 00:25:35.630 [2024-09-28 09:03:13.471872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.630 [2024-09-28 09:03:13.619999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.889 [2024-09-28 09:03:13.772319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:36.455 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:36.455 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:25:36.455 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:36.455 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:36.455 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:36.455 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.455 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:36.455 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.455 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.455 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:37.022 nvme0n1 00:25:37.022 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:37.022 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.022 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:37.022 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.022 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:37.022 09:03:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:37.022 Running I/O for 2 seconds... 
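For reference, the error-injection phase traced above boils down to the sequence below. This is a condensed sketch reconstructed only from the commands visible in this log (the socket addressed by the rpc_cmd accel calls and the nvmf target setup are not shown in this excerpt), not the digest.sh script itself.

# Launch the bdevperf initiator on core 1 (core mask 0x2) with its own RPC socket;
# -z makes it wait for a perform_tests RPC, and waitforlisten in the trace polls
# for /var/tmp/bperf.sock before configuration continues.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# Count NVMe errors per status code, retry failed I/O at the bdev layer (-1 = unlimited),
# then attach the target subsystem with TCP data digest enabled (--ddgst).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt crc32c results in the accel framework (issued via rpc_cmd in the trace),
# so data digest checks fail and the affected I/O completes with
# TRANSIENT TRANSPORT ERROR (00/22), as in the records that follow:
#   accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the 2-second workload, then read the transient-error counter back the same
# way get_transient_errcount does in the trace above.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'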
00:25:37.022 [2024-09-28 09:03:14.918399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfef90 00:25:37.022 [2024-09-28 09:03:14.919980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.022 [2024-09-28 09:03:14.920040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:37.022 [2024-09-28 09:03:14.942250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe720 00:25:37.022 [2024-09-28 09:03:14.944989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.022 [2024-09-28 09:03:14.945223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.022 [2024-09-28 09:03:14.958954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfef90 00:25:37.022 [2024-09-28 09:03:14.961655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.022 [2024-09-28 09:03:14.961720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:37.022 [2024-09-28 09:03:14.975319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfe2e8 00:25:37.022 [2024-09-28 09:03:14.978001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.022 [2024-09-28 09:03:14.978064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:37.022 [2024-09-28 09:03:14.992776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfda78 00:25:37.022 [2024-09-28 09:03:14.995667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.022 [2024-09-28 09:03:14.995726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:37.022 [2024-09-28 09:03:15.012176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfd208 00:25:37.022 [2024-09-28 09:03:15.015494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.022 [2024-09-28 09:03:15.015558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:37.281 [2024-09-28 09:03:15.031530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfc998 00:25:37.281 [2024-09-28 09:03:15.034679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.281 [2024-09-28 09:03:15.034768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:37.281 [2024-09-28 09:03:15.050409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfc128 00:25:37.281 [2024-09-28 09:03:15.053477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.281 [2024-09-28 09:03:15.053687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:37.281 [2024-09-28 09:03:15.070771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfb8b8 00:25:37.281 [2024-09-28 09:03:15.073805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.281 [2024-09-28 09:03:15.073918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:37.281 [2024-09-28 09:03:15.089471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfb048 00:25:37.281 [2024-09-28 09:03:15.092405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.281 [2024-09-28 09:03:15.092449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:37.281 [2024-09-28 09:03:15.107097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dfa7d8 00:25:37.281 [2024-09-28 09:03:15.109738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.281 [2024-09-28 09:03:15.109797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:37.281 [2024-09-28 09:03:15.124053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df9f68 00:25:37.281 [2024-09-28 09:03:15.126891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.281 [2024-09-28 09:03:15.126963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:37.281 [2024-09-28 09:03:15.141569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df96f8 00:25:37.281 [2024-09-28 09:03:15.144186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.281 [2024-09-28 09:03:15.144250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:37.281 [2024-09-28 09:03:15.158760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df8e88 00:25:37.281 [2024-09-28 09:03:15.161498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.281 [2024-09-28 09:03:15.161727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:37.281 [2024-09-28 09:03:15.176077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df8618 00:25:37.281 [2024-09-28 09:03:15.178577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.281 [2024-09-28 09:03:15.178636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:37.281 [2024-09-28 09:03:15.193335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df7da8 00:25:37.281 [2024-09-28 09:03:15.195923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.281 [2024-09-28 09:03:15.195978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:37.281 [2024-09-28 09:03:15.210652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df7538 00:25:37.281 [2024-09-28 09:03:15.213342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.281 [2024-09-28 09:03:15.213541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:37.281 [2024-09-28 09:03:15.227996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df6cc8 00:25:37.281 [2024-09-28 09:03:15.230644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.281 [2024-09-28 09:03:15.230710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:37.282 [2024-09-28 09:03:15.245168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df6458 00:25:37.282 [2024-09-28 09:03:15.247871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.282 [2024-09-28 09:03:15.247937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:37.282 [2024-09-28 09:03:15.262770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df5be8 00:25:37.282 [2024-09-28 09:03:15.265337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.282 [2024-09-28 09:03:15.265543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:37.540 [2024-09-28 09:03:15.281674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df5378 00:25:37.540 [2024-09-28 09:03:15.284400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:37.540 [2024-09-28 09:03:15.284459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:37.540 [2024-09-28 09:03:15.298711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df4b08 00:25:37.540 [2024-09-28 09:03:15.301253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.540 [2024-09-28 09:03:15.301310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:37.540 [2024-09-28 09:03:15.315252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df4298 00:25:37.540 [2024-09-28 09:03:15.317675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.540 [2024-09-28 09:03:15.317732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:37.540 [2024-09-28 09:03:15.331664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df3a28 00:25:37.540 [2024-09-28 09:03:15.334052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.540 [2024-09-28 09:03:15.334116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:37.540 [2024-09-28 09:03:15.347815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df31b8 00:25:37.540 [2024-09-28 09:03:15.350057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.540 [2024-09-28 09:03:15.350124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:37.540 [2024-09-28 09:03:15.363885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df2948 00:25:37.540 [2024-09-28 09:03:15.366170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.540 [2024-09-28 09:03:15.366228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:37.540 [2024-09-28 09:03:15.379947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df20d8 00:25:37.540 [2024-09-28 09:03:15.382131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.540 [2024-09-28 09:03:15.382173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:37.540 [2024-09-28 09:03:15.395917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df1868 00:25:37.540 [2024-09-28 09:03:15.398204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:24370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.540 [2024-09-28 09:03:15.398268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:37.540 [2024-09-28 09:03:15.412335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df0ff8 00:25:37.540 [2024-09-28 09:03:15.414535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.541 [2024-09-28 09:03:15.414597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:37.541 [2024-09-28 09:03:15.428567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df0788 00:25:37.541 [2024-09-28 09:03:15.430782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.541 [2024-09-28 09:03:15.430864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:37.541 [2024-09-28 09:03:15.444763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deff18 00:25:37.541 [2024-09-28 09:03:15.446956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.541 [2024-09-28 09:03:15.446997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:37.541 [2024-09-28 09:03:15.460903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019def6a8 00:25:37.541 [2024-09-28 09:03:15.463024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.541 [2024-09-28 09:03:15.463089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:37.541 [2024-09-28 09:03:15.477021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deee38 00:25:37.541 [2024-09-28 09:03:15.479070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.541 [2024-09-28 09:03:15.479131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:37.541 [2024-09-28 09:03:15.493123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dee5c8 00:25:37.541 [2024-09-28 09:03:15.495152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.541 [2024-09-28 09:03:15.495194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:37.541 [2024-09-28 09:03:15.509307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dedd58 00:25:37.541 [2024-09-28 09:03:15.511377] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.541 [2024-09-28 09:03:15.511433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:37.541 [2024-09-28 09:03:15.525723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019ded4e8 00:25:37.541 [2024-09-28 09:03:15.527785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.541 [2024-09-28 09:03:15.527857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:37.799 [2024-09-28 09:03:15.543088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019decc78 00:25:37.799 [2024-09-28 09:03:15.545516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.799 [2024-09-28 09:03:15.545746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:37.799 [2024-09-28 09:03:15.560159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dec408 00:25:37.799 [2024-09-28 09:03:15.562209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.799 [2024-09-28 09:03:15.562273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:37.799 [2024-09-28 09:03:15.576308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019debb98 00:25:37.799 [2024-09-28 09:03:15.578321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.799 [2024-09-28 09:03:15.578379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:37.799 [2024-09-28 09:03:15.592765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deb328 00:25:37.799 [2024-09-28 09:03:15.594818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.799 [2024-09-28 09:03:15.594875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:37.799 [2024-09-28 09:03:15.609377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deaab8 00:25:37.799 [2024-09-28 09:03:15.611459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.799 [2024-09-28 09:03:15.611519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:37.799 [2024-09-28 09:03:15.626001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dea248 
00:25:37.799 [2024-09-28 09:03:15.627869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.799 [2024-09-28 09:03:15.627931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:37.799 [2024-09-28 09:03:15.642181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de99d8 00:25:37.799 [2024-09-28 09:03:15.644037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.799 [2024-09-28 09:03:15.644095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:37.799 [2024-09-28 09:03:15.658464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de9168 00:25:37.800 [2024-09-28 09:03:15.660371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.800 [2024-09-28 09:03:15.660428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:37.800 [2024-09-28 09:03:15.674857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de88f8 00:25:37.800 [2024-09-28 09:03:15.677099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.800 [2024-09-28 09:03:15.677359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:37.800 [2024-09-28 09:03:15.691593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de8088 00:25:37.800 [2024-09-28 09:03:15.693682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.800 [2024-09-28 09:03:15.693965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:37.800 [2024-09-28 09:03:15.708362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de7818 00:25:37.800 [2024-09-28 09:03:15.710388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.800 [2024-09-28 09:03:15.710626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:37.800 [2024-09-28 09:03:15.725343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de6fa8 00:25:37.800 [2024-09-28 09:03:15.727332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.800 [2024-09-28 09:03:15.727560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:37.800 [2024-09-28 09:03:15.742373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x200019de6738 00:25:37.800 [2024-09-28 09:03:15.744335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.800 [2024-09-28 09:03:15.744562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:37.800 [2024-09-28 09:03:15.759094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de5ec8 00:25:37.800 [2024-09-28 09:03:15.761003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.800 [2024-09-28 09:03:15.761239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:37.800 [2024-09-28 09:03:15.775675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de5658 00:25:37.800 [2024-09-28 09:03:15.777658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.800 [2024-09-28 09:03:15.777913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:37.800 [2024-09-28 09:03:15.793039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de4de8 00:25:38.057 [2024-09-28 09:03:15.795187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.057 [2024-09-28 09:03:15.795448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:38.057 [2024-09-28 09:03:15.810609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de4578 00:25:38.057 [2024-09-28 09:03:15.812488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.057 [2024-09-28 09:03:15.812744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:38.057 [2024-09-28 09:03:15.827547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de3d08 00:25:38.057 [2024-09-28 09:03:15.829468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.057 [2024-09-28 09:03:15.829698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:38.057 [2024-09-28 09:03:15.844540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de3498 00:25:38.057 [2024-09-28 09:03:15.846340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.057 [2024-09-28 09:03:15.846580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:38.057 [2024-09-28 
09:03:15.861167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de2c28 00:25:38.057 [2024-09-28 09:03:15.862886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.057 [2024-09-28 09:03:15.862983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:38.057 [2024-09-28 09:03:15.877522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de23b8 00:25:38.057 [2024-09-28 09:03:15.879155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.057 [2024-09-28 09:03:15.879220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:38.057 [2024-09-28 09:03:15.894137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de1b48 00:25:38.057 [2024-09-28 09:03:15.895709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.057 [2024-09-28 09:03:15.895771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:38.057 14804.00 IOPS, 57.83 MiB/s [2024-09-28 09:03:15.910649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de12d8 00:25:38.057 [2024-09-28 09:03:15.912349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.057 [2024-09-28 09:03:15.912406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:38.057 [2024-09-28 09:03:15.926866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de0a68 00:25:38.057 [2024-09-28 09:03:15.928423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.057 [2024-09-28 09:03:15.928480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:38.058 [2024-09-28 09:03:15.943498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de01f8 00:25:38.058 [2024-09-28 09:03:15.945085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.058 [2024-09-28 09:03:15.945138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:38.058 [2024-09-28 09:03:15.959649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019ddf988 00:25:38.058 [2024-09-28 09:03:15.961344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.058 [2024-09-28 09:03:15.961404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:38.058 [2024-09-28 09:03:15.976106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019ddf118 00:25:38.058 [2024-09-28 09:03:15.977610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.058 [2024-09-28 09:03:15.977835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:38.058 [2024-09-28 09:03:15.992546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dde8a8 00:25:38.058 [2024-09-28 09:03:15.994150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.058 [2024-09-28 09:03:15.994209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:38.058 [2024-09-28 09:03:16.008779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dde038 00:25:38.058 [2024-09-28 09:03:16.010300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.058 [2024-09-28 09:03:16.010366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:38.058 [2024-09-28 09:03:16.031510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dde038 00:25:38.058 [2024-09-28 09:03:16.034540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.058 [2024-09-28 09:03:16.034598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:38.058 [2024-09-28 09:03:16.047994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dde8a8 00:25:38.058 [2024-09-28 09:03:16.050954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.058 [2024-09-28 09:03:16.051054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.065507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019ddf118 00:25:38.316 [2024-09-28 09:03:16.068565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.068631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.084947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019ddf988 00:25:38.316 [2024-09-28 09:03:16.088184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.088267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.103086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de01f8 00:25:38.316 [2024-09-28 09:03:16.105858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.105936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.119818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de0a68 00:25:38.316 [2024-09-28 09:03:16.122476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.122533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.136362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de12d8 00:25:38.316 [2024-09-28 09:03:16.139035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.139093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.152537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de1b48 00:25:38.316 [2024-09-28 09:03:16.155230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.155295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.168912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de23b8 00:25:38.316 [2024-09-28 09:03:16.171758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.171833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.185470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de2c28 00:25:38.316 [2024-09-28 09:03:16.187978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.188041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.201639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de3498 00:25:38.316 [2024-09-28 09:03:16.204238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16543 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:38.316 [2024-09-28 09:03:16.204295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.218035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de3d08 00:25:38.316 [2024-09-28 09:03:16.220442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.220500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.234345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de4578 00:25:38.316 [2024-09-28 09:03:16.236919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.236982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.250726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de4de8 00:25:38.316 [2024-09-28 09:03:16.253257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.253477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.267421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de5658 00:25:38.316 [2024-09-28 09:03:16.269886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.269949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.284831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de5ec8 00:25:38.316 [2024-09-28 09:03:16.287565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.287625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:38.316 [2024-09-28 09:03:16.303536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de6738 00:25:38.316 [2024-09-28 09:03:16.306351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.316 [2024-09-28 09:03:16.306410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.323121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de6fa8 00:25:38.575 [2024-09-28 09:03:16.325674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:18073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.575 [2024-09-28 09:03:16.325909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.341302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de7818 00:25:38.575 [2024-09-28 09:03:16.343810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.575 [2024-09-28 09:03:16.343849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.358908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de8088 00:25:38.575 [2024-09-28 09:03:16.361357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.575 [2024-09-28 09:03:16.361560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.376158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de88f8 00:25:38.575 [2024-09-28 09:03:16.378895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.575 [2024-09-28 09:03:16.379121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.394146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de9168 00:25:38.575 [2024-09-28 09:03:16.396583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.575 [2024-09-28 09:03:16.396862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.412013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019de99d8 00:25:38.575 [2024-09-28 09:03:16.414626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.575 [2024-09-28 09:03:16.414860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.429937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dea248 00:25:38.575 [2024-09-28 09:03:16.432414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.575 [2024-09-28 09:03:16.432634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.448004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deaab8 00:25:38.575 [2024-09-28 09:03:16.450507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.575 [2024-09-28 09:03:16.450727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.465729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deb328 00:25:38.575 [2024-09-28 09:03:16.468468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.575 [2024-09-28 09:03:16.468686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.483700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019debb98 00:25:38.575 [2024-09-28 09:03:16.486260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.575 [2024-09-28 09:03:16.486477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.501780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dec408 00:25:38.575 [2024-09-28 09:03:16.504136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.575 [2024-09-28 09:03:16.504343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.519564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019decc78 00:25:38.575 [2024-09-28 09:03:16.521884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.575 [2024-09-28 09:03:16.521949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.537023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019ded4e8 00:25:38.575 [2024-09-28 09:03:16.539512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.575 [2024-09-28 09:03:16.539575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:38.575 [2024-09-28 09:03:16.554337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dedd58 00:25:38.576 [2024-09-28 09:03:16.556478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.576 [2024-09-28 09:03:16.556536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:38.834 [2024-09-28 09:03:16.571874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019dee5c8 
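Note on the repeating pairs above: each injected failure shows up as two entries — tcp.c's data_crc32_calc_done() reports a data digest (CRC32C) mismatch on the write's data PDU, and the same command then completes with COMMAND TRANSIENT TRANSPORT ERROR (sct/sc 00/22), which increments the controller's nvme_error counters. The check at host/digest.sh@71 later in this trace only requires that counter to be non-zero. A minimal sketch of that check, reusing the rpc.py call and jq filter shown at digest.sh@27-28 in this trace (socket path and bdev name are the ones from this job; the condensed jq path is equivalent to the piped form in the trace):

  # Sketch: read the transient-transport-error count the way digest.sh does.
  # Assumes bdevperf is still listening on /var/tmp/bperf.sock and exposes nvme0n1.
  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The test passes as long as at least one write tripped the injected digest error.
  (( count > 0 )) && echo "digest errors detected: $count" || echo "no transient transport errors"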
00:25:38.834 [2024-09-28 09:03:16.574433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.834 [2024-09-28 09:03:16.574493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:38.834 [2024-09-28 09:03:16.588760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deee38 00:25:38.834 [2024-09-28 09:03:16.590987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.834 [2024-09-28 09:03:16.591187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:38.834 [2024-09-28 09:03:16.605691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019def6a8 00:25:38.834 [2024-09-28 09:03:16.607794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.834 [2024-09-28 09:03:16.607885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:38.834 [2024-09-28 09:03:16.622306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019deff18 00:25:38.834 [2024-09-28 09:03:16.624285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.835 [2024-09-28 09:03:16.624354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:38.835 [2024-09-28 09:03:16.638603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df0788 00:25:38.835 [2024-09-28 09:03:16.640679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.835 [2024-09-28 09:03:16.640742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:38.835 [2024-09-28 09:03:16.655042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df0ff8 00:25:38.835 [2024-09-28 09:03:16.657066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.835 [2024-09-28 09:03:16.657128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:38.835 [2024-09-28 09:03:16.671346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df1868 00:25:38.835 [2024-09-28 09:03:16.673377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.835 [2024-09-28 09:03:16.673434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:38.835 [2024-09-28 09:03:16.687604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x200019df20d8 00:25:38.835 [2024-09-28 09:03:16.689700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.835 [2024-09-28 09:03:16.689933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:38.835 [2024-09-28 09:03:16.704216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df2948 00:25:38.835 [2024-09-28 09:03:16.706275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.835 [2024-09-28 09:03:16.706338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:38.835 [2024-09-28 09:03:16.720570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df31b8 00:25:38.835 [2024-09-28 09:03:16.722593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.835 [2024-09-28 09:03:16.722656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:38.835 [2024-09-28 09:03:16.737135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df3a28 00:25:38.835 [2024-09-28 09:03:16.739388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.835 [2024-09-28 09:03:16.739444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:38.835 [2024-09-28 09:03:16.753732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df4298 00:25:38.835 [2024-09-28 09:03:16.755753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.835 [2024-09-28 09:03:16.755810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:38.835 [2024-09-28 09:03:16.770284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df4b08 00:25:38.835 [2024-09-28 09:03:16.772129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.835 [2024-09-28 09:03:16.772196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:38.835 [2024-09-28 09:03:16.786538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df5378 00:25:38.835 [2024-09-28 09:03:16.788418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.835 [2024-09-28 09:03:16.788479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:38.835 [2024-09-28 09:03:16.802846] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df5be8 00:25:38.835 [2024-09-28 09:03:16.804981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.835 [2024-09-28 09:03:16.805237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:38.835 [2024-09-28 09:03:16.819660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df6458 00:25:38.835 [2024-09-28 09:03:16.821656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:38.835 [2024-09-28 09:03:16.821901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:39.093 [2024-09-28 09:03:16.837828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df6cc8 00:25:39.093 [2024-09-28 09:03:16.839757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.093 [2024-09-28 09:03:16.840004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:39.093 [2024-09-28 09:03:16.854696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df7538 00:25:39.093 [2024-09-28 09:03:16.856631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.093 [2024-09-28 09:03:16.856928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:39.093 [2024-09-28 09:03:16.871423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df7da8 00:25:39.093 [2024-09-28 09:03:16.873469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.093 [2024-09-28 09:03:16.873710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:39.094 [2024-09-28 09:03:16.888173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df8618 00:25:39.094 [2024-09-28 09:03:16.890108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.094 [2024-09-28 09:03:16.890354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:39.094 14866.00 IOPS, 58.07 MiB/s [2024-09-28 09:03:16.905051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200019df8e88 00:25:39.094 [2024-09-28 09:03:16.906960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:39.094 [2024-09-28 09:03:16.907158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:39.094 00:25:39.094 Latency(us) 00:25:39.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.094 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:39.094 nvme0n1 : 2.01 14884.07 58.14 0.00 0.00 8591.99 3038.49 31695.59 00:25:39.094 =================================================================================================================== 00:25:39.094 Total : 14884.07 58.14 0.00 0.00 8591.99 3038.49 31695.59 00:25:39.094 { 00:25:39.094 "results": [ 00:25:39.094 { 00:25:39.094 "job": "nvme0n1", 00:25:39.094 "core_mask": "0x2", 00:25:39.094 "workload": "randwrite", 00:25:39.094 "status": "finished", 00:25:39.094 "queue_depth": 128, 00:25:39.094 "io_size": 4096, 00:25:39.094 "runtime": 2.006172, 00:25:39.094 "iops": 14884.067766871434, 00:25:39.094 "mibps": 58.14088971434154, 00:25:39.094 "io_failed": 0, 00:25:39.094 "io_timeout": 0, 00:25:39.094 "avg_latency_us": 8591.985635754734, 00:25:39.094 "min_latency_us": 3038.4872727272727, 00:25:39.094 "max_latency_us": 31695.592727272728 00:25:39.094 } 00:25:39.094 ], 00:25:39.094 "core_count": 1 00:25:39.094 } 00:25:39.094 09:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:39.094 09:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:39.094 09:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:39.094 | .driver_specific 00:25:39.094 | .nvme_error 00:25:39.094 | .status_code 00:25:39.094 | .command_transient_transport_error' 00:25:39.094 09:03:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:39.352 09:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 117 > 0 )) 00:25:39.352 09:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86523 00:25:39.352 09:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86523 ']' 00:25:39.352 09:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86523 00:25:39.352 09:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:25:39.352 09:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:39.352 09:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86523 00:25:39.352 killing process with pid 86523 00:25:39.352 Received shutdown signal, test time was about 2.000000 seconds 00:25:39.352 00:25:39.352 Latency(us) 00:25:39.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.352 =================================================================================================================== 00:25:39.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.352 09:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:39.352 09:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:39.352 09:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 86523' 00:25:39.352 09:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86523 00:25:39.352 09:03:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86523 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86590 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86590 /var/tmp/bperf.sock 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 86590 ']' 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:40.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:40.288 09:03:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:40.288 [2024-09-28 09:03:18.237460] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:25:40.288 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:40.288 Zero copy mechanism will not be used. 
00:25:40.288 [2024-09-28 09:03:18.238133] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86590 ] 00:25:40.546 [2024-09-28 09:03:18.407140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.805 [2024-09-28 09:03:18.557190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.805 [2024-09-28 09:03:18.703964] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:41.372 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:41.372 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:25:41.372 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:41.372 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:41.372 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:41.372 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.372 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:41.372 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.372 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:41.372 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:41.631 nvme0n1 00:25:41.631 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:41.631 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.631 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:41.631 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.631 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:41.631 09:03:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:41.890 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:41.890 Zero copy mechanism will not be used. 00:25:41.890 Running I/O for 2 seconds... 
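The trace above shows how the second pass is wired up before "Running I/O for 2 seconds...": bdevperf is started against /var/tmp/bperf.sock with randwrite, 128 KiB I/O and queue depth 16, per-controller error statistics and unlimited bdev retries are enabled, the controller is attached with --ddgst so data digests are generated and verified, and crc32c corruption is injected every 32 operations via accel_error_inject_error before perform_tests kicks off the timed run. A condensed sketch of that sequence using the same commands and arguments that appear in this trace; paths and addresses are the ones from this job, and routing the accel_error_inject_error calls to the app's default RPC socket (rather than bperf.sock) is an assumption about what rpc_cmd expands to in this harness:

  # Sketch: setup for the 128 KiB / qd=16 digest-error pass, as traced above.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"            # default socket; assumed target of rpc_cmd here

  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep error counters, retry forever
  $TGT_RPC accel_error_inject_error -o crc32c -t disable                     # start from a clean injection state
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                         # data digest enabled on the initiator
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32               # corrupt every 32nd crc32c computation

  # Timed run; each corrupted digest appears below as a 00/22 completion.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests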
00:25:41.890 [2024-09-28 09:03:19.735766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.890 [2024-09-28 09:03:19.736219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.890 [2024-09-28 09:03:19.736262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.890 [2024-09-28 09:03:19.742022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.890 [2024-09-28 09:03:19.742332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.890 [2024-09-28 09:03:19.742385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.890 [2024-09-28 09:03:19.747771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.890 [2024-09-28 09:03:19.748168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.890 [2024-09-28 09:03:19.748242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.890 [2024-09-28 09:03:19.753725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.890 [2024-09-28 09:03:19.754314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.890 [2024-09-28 09:03:19.754364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.890 [2024-09-28 09:03:19.760007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.890 [2024-09-28 09:03:19.760338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.890 [2024-09-28 09:03:19.760373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.890 [2024-09-28 09:03:19.765945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.890 [2024-09-28 09:03:19.766268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.890 [2024-09-28 09:03:19.766303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.890 [2024-09-28 09:03:19.771761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.890 [2024-09-28 09:03:19.772146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.890 [2024-09-28 09:03:19.772229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.890 [2024-09-28 09:03:19.777886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.778199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.778242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.783927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.784269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.784303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.790152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.790473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.790514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.795979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.796291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.796335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.801860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.802184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.802219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.807764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.808171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.808228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.813996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.814308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 
09:03:19.814353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.820000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.820318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.820362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.825974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.826318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.826353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.831930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.832251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.832294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.837943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.838264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.838309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.843844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.844164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.844199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.849809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.850367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.850423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.856158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.856471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.856514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.862286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.862627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.862662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.868281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.868598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.868633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.874343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.874653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.874701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.891 [2024-09-28 09:03:19.880285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:41.891 [2024-09-28 09:03:19.880704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.891 [2024-09-28 09:03:19.880835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.886842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.887257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.887315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.893165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.893508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.893553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.899049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.899364] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.899406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.904943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.905359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.905395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.910897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.911211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.911253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.916710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.917347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.917396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.922890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.923209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.923244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.928683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.929300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.929341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.934773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.935154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.935251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.940644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.941287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.941337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.946950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.947282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.947317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.953025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.953407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.953450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.958896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.959211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.959254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.964765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.965414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.965455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.971028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.971351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.971385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.976939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.977276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.977318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 
09:03:19.982757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.983149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.983207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.988650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.989295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.989350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:19.994757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:19.995145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:19.995211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:20.000622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:20.001269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:20.001318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.152 [2024-09-28 09:03:20.007382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.152 [2024-09-28 09:03:20.007727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.152 [2024-09-28 09:03:20.007765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.013916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.014247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.014291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.020114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.020657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.020710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.027741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.028369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.028414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.034340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.034660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.034705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.040349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.040661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.040704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.046340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.046664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.046699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.052265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.052578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.052621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.058277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.058588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.058631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.064180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.064505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.064540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.070224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.070545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.070580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.076059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.076372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.076414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.082227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.082584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.082620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.088214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.088544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.088579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.094438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.094770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.094825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.100272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.100586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.100633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.106201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.106519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.106554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.111990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.112302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.112341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.118094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.118446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.118490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.124366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.124694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.124730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.130642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.131093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.131159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.137496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.137852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.137913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.153 [2024-09-28 09:03:20.144683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.153 [2024-09-28 09:03:20.145120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.153 [2024-09-28 09:03:20.145211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.413 [2024-09-28 09:03:20.151729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.413 [2024-09-28 09:03:20.152345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.413 [2024-09-28 09:03:20.152401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.158527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.158876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.158921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.164891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.165288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.165340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.171504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.172035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.172078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.178531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.178923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.178973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.185794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.186245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.186293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.192613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.193005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.193056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.199476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.200030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.200087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.206283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.206611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.206647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.212386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.212755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.212814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.218661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.219018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.219065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.225028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.225405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.225441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.231117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.231447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.231482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.237262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.237582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.237627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.243413] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.243726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.243763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.249639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.250018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.250059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.255728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.256102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.256156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.261914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.262259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.262302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.268207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.268545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.268581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.274344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.274665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.274708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.280491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.280870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.280916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.286651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.287018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.287053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.293086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.293476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.293510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.299204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.299521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.299565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.305411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.305737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.305774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.311485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.311813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.311883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.317890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.318259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.414 [2024-09-28 09:03:20.318316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.414 [2024-09-28 09:03:20.324017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.414 [2024-09-28 09:03:20.324345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.324380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.415 [2024-09-28 09:03:20.330142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.415 [2024-09-28 09:03:20.330494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.330530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.415 [2024-09-28 09:03:20.336598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.415 [2024-09-28 09:03:20.337001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.337048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.415 [2024-09-28 09:03:20.342837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.415 [2024-09-28 09:03:20.343162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.343206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.415 [2024-09-28 09:03:20.348772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.415 [2024-09-28 09:03:20.349286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.349352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.415 [2024-09-28 09:03:20.355017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.415 [2024-09-28 09:03:20.355339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.355383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.415 [2024-09-28 09:03:20.361416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.415 [2024-09-28 09:03:20.361736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.361780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.415 [2024-09-28 09:03:20.367551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.415 [2024-09-28 09:03:20.367892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.367927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.415 [2024-09-28 09:03:20.373556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.415 [2024-09-28 09:03:20.373940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.373976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.415 [2024-09-28 09:03:20.379620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.415 [2024-09-28 09:03:20.380044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.380099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.415 [2024-09-28 09:03:20.385984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.415 [2024-09-28 09:03:20.386344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.386378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.415 [2024-09-28 09:03:20.392075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.415 [2024-09-28 09:03:20.392414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.392448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.415 [2024-09-28 09:03:20.398178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.415 [2024-09-28 09:03:20.398515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.398559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.415 [2024-09-28 09:03:20.404372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.415 [2024-09-28 09:03:20.404734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.415 [2024-09-28 09:03:20.404810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.410931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.411252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.411288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.417219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.417530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.417575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.423404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.423725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.423771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.429565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.429934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.429970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.435589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.435934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.435977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.441547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.441909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.441952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.447475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.447797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.447846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.453386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.453704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.453739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.459377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.459693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.459736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.465408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.465728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.465763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.471345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.471663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.471699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.477361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.477674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.477719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.483262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.483576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.483621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.489172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.489511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.489547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.495118] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.495427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.495470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.501064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.501411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.501454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.507003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.507341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.507376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.512934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.513294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.513328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.518841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.519216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.519275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.524792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.525181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.525231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.676 [2024-09-28 09:03:20.530643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.676 [2024-09-28 09:03:20.531031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.676 [2024-09-28 09:03:20.531071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.536766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.537402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.537451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.543000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.543337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.543378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.548974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.549330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.549365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.554798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.555204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.555261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.560742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.561388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.561437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.567041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.567400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.567435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.572962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.573324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.573359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.578879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.579207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.579269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.584726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.585373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.585413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.590965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.591306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.591340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.596924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.597276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.597319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.602728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.603116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.603183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.608880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.609245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.609279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.614945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.615293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.615333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.620860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.621217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.621260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.626671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.627063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.627104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.632738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.633395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.633435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.639006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.639338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.639381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.644924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.645283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.645318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.650793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.651228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.651299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.656883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.657236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.657279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.677 [2024-09-28 09:03:20.662867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.677 [2024-09-28 09:03:20.663241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.677 [2024-09-28 09:03:20.663300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.669345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.669727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.669766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.675714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.676144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.676203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.681947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.682265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.682310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.687772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.688107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.688142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.693745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.694133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.694207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.699736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.700121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.700218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.706047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.706369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.706404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.711880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.712198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.712232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.717814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.718183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.718264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.723926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.724243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.724287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.938 5042.00 IOPS, 630.25 MiB/s [2024-09-28 09:03:20.730995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.731441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.731483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.737294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.737608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.737645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 
09:03:20.743382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.743902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.743972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.749616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.749972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.750015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.755615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.756173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.756229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.761796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.762159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.762248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.767885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.768195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.768229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.773923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.774233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.774268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.779762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.780325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.780367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.785947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.786258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.786293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.791760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.938 [2024-09-28 09:03:20.792314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.938 [2024-09-28 09:03:20.792386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.938 [2024-09-28 09:03:20.798050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.798364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.798398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.803908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.804221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.804255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.810038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.810357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.810391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.816035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.816349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.816384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.822078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.822401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.822437] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.827992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.828302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.828337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.833948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.834274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.834309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.839882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.840207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.840243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.845729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.846094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.846134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.851766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.852324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.852364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.857998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.858347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.858381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.864086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.864401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.864435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.870263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.870582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.870618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.876248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.876561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.876596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.882235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.882549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.882585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.888196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.888508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.888544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.894117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.894430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.894465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.900048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.900370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.900404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.906211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.906524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.906560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.912052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.912362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.912396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.918168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.918487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.918523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.924046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.924356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.924390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.939 [2024-09-28 09:03:20.930617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:42.939 [2024-09-28 09:03:20.930988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.939 [2024-09-28 09:03:20.931027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.199 [2024-09-28 09:03:20.937262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:20.937603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:20.937642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:20.943271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:20.943761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:20.943832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:20.949430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:20.949744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:20.949780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:20.955317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:20.955630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:20.955666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:20.961320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:20.961631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:20.961667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:20.967500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:20.967831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:20.967880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:20.973452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:20.973768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:20.973815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:20.979376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:20.979690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:20.979725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:20.985433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:20.985744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:20.985779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:20.991369] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:20.991680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:20.991714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:20.997297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:20.997619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:20.997654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:21.003157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:21.003471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:21.003506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:21.009068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:21.009413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:21.009447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:21.015040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:21.015371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:21.015407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:21.021148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:21.021477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:21.021512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:21.026993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:21.027302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:21.027337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:21.033164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:21.033495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:21.033530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:21.039042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:21.039353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:21.039388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:21.045037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:21.045395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.200 [2024-09-28 09:03:21.045430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.200 [2024-09-28 09:03:21.050846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.200 [2024-09-28 09:03:21.051160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.201 [2024-09-28 09:03:21.051195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.201 [2024-09-28 09:03:21.056629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.201 [2024-09-28 09:03:21.057041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.201 [2024-09-28 09:03:21.057083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.201 [2024-09-28 09:03:21.062915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.201 [2024-09-28 09:03:21.063239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.201 [2024-09-28 09:03:21.063275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.201 [2024-09-28 09:03:21.068996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.201 [2024-09-28 09:03:21.069341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.201 [2024-09-28 09:03:21.069376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.201 [2024-09-28 09:03:21.074744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.201 [2024-09-28 09:03:21.075270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.201 [2024-09-28 09:03:21.075340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.201 [2024-09-28 09:03:21.080899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.201 [2024-09-28 09:03:21.081256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.201 [2024-09-28 09:03:21.081291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.201 [2024-09-28 09:03:21.086812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.201 [2024-09-28 09:03:21.087125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.201 [2024-09-28 09:03:21.087160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.201 [2024-09-28 09:03:21.092644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.201 [2024-09-28 09:03:21.093060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.201 [2024-09-28 09:03:21.093102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.201 [2024-09-28 09:03:21.098714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.201 [2024-09-28 09:03:21.099255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.201 [2024-09-28 09:03:21.099309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.201 [2024-09-28 09:03:21.104841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.201 [2024-09-28 09:03:21.105195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.202 [2024-09-28 09:03:21.105229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.202 [2024-09-28 09:03:21.110708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.202 [2024-09-28 09:03:21.111249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:43.202 [2024-09-28 09:03:21.111305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.202 [2024-09-28 09:03:21.116972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.202 [2024-09-28 09:03:21.117334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.202 [2024-09-28 09:03:21.117400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.202 [2024-09-28 09:03:21.122935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.202 [2024-09-28 09:03:21.123246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.202 [2024-09-28 09:03:21.123280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.202 [2024-09-28 09:03:21.128692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.202 [2024-09-28 09:03:21.129111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.202 [2024-09-28 09:03:21.129183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.202 [2024-09-28 09:03:21.134828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.202 [2024-09-28 09:03:21.135142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.202 [2024-09-28 09:03:21.135177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.202 [2024-09-28 09:03:21.140612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.202 [2024-09-28 09:03:21.141020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.202 [2024-09-28 09:03:21.141061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.202 [2024-09-28 09:03:21.146689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.202 [2024-09-28 09:03:21.147233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.202 [2024-09-28 09:03:21.147289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.202 [2024-09-28 09:03:21.152781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.203 [2024-09-28 09:03:21.153193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.203 [2024-09-28 09:03:21.153227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.203 [2024-09-28 09:03:21.158627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.203 [2024-09-28 09:03:21.159190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.203 [2024-09-28 09:03:21.159260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.203 [2024-09-28 09:03:21.164668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.203 [2024-09-28 09:03:21.165068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.203 [2024-09-28 09:03:21.165105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.203 [2024-09-28 09:03:21.170872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.203 [2024-09-28 09:03:21.171184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.203 [2024-09-28 09:03:21.171219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.203 [2024-09-28 09:03:21.176687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.203 [2024-09-28 09:03:21.177111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.203 [2024-09-28 09:03:21.177199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.203 [2024-09-28 09:03:21.182772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.203 [2024-09-28 09:03:21.183328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.203 [2024-09-28 09:03:21.183370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.203 [2024-09-28 09:03:21.189048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.203 [2024-09-28 09:03:21.189505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.203 [2024-09-28 09:03:21.189720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.196335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.196662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.196701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.203388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.203727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.203766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.210516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.211112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.211167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.217551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.217901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.217953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.224010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.224381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.224416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.230496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.231060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.231105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.237328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.237645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.237681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.243410] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.243724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.243759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.249307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.249621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.249656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.255457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.255778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.255826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.261556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.261894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.261928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.267469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.267780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.267828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.273426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.273736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.273771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.279483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.279811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.279858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.285508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.285822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.285867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.291613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.291978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.464 [2024-09-28 09:03:21.292019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.464 [2024-09-28 09:03:21.297761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.464 [2024-09-28 09:03:21.298147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.298235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.303763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.304130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.304204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.309888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.310203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.310238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.315732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.316116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.316173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.321786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.322156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.322228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.327739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.328132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.328206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.334033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.334346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.334381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.339987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.340321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.340356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.346072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.346393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.346429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.351949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.352287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.352322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.358096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.358428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.358464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.364050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.364380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.364415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.370087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.370400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.370435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.375940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.376276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.376311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.382027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.382339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.382374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.388495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.388876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.388914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.395072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.395416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.395452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.401802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.402425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.402484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.408921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.409336] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.409372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.415555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.415943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.415982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.422107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.422457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.422493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.428453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.428793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.428861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.434910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.435257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.435292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.441034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.441393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.441428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.447312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.447632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.447668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.465 [2024-09-28 09:03:21.453404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:25:43.465 [2024-09-28 09:03:21.453788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.465 [2024-09-28 09:03:21.453853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.725 [2024-09-28 09:03:21.460118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.725 [2024-09-28 09:03:21.460437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.725 [2024-09-28 09:03:21.460475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.725 [2024-09-28 09:03:21.466592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.725 [2024-09-28 09:03:21.466983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.725 [2024-09-28 09:03:21.467026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.725 [2024-09-28 09:03:21.473016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.725 [2024-09-28 09:03:21.473389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.725 [2024-09-28 09:03:21.473425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.725 [2024-09-28 09:03:21.479140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.725 [2024-09-28 09:03:21.479632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.725 [2024-09-28 09:03:21.479675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.725 [2024-09-28 09:03:21.485592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.725 [2024-09-28 09:03:21.485950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.725 [2024-09-28 09:03:21.485986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.725 [2024-09-28 09:03:21.491612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.725 [2024-09-28 09:03:21.491962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.725 [2024-09-28 09:03:21.491998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.725 [2024-09-28 09:03:21.497619] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.725 [2024-09-28 09:03:21.497972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.725 [2024-09-28 09:03:21.498047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.725 [2024-09-28 09:03:21.503847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.504173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.504208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.509833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.510171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.510206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.515918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.516256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.516291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.521953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.522276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.522311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.528174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.528492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.528528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.534328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.534651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.534686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.540377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.540703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.540739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.546472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.546804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.546853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.552705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.553112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.553155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.558789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.559125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.559162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.564745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.565213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.565259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.570918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.571237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.571272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.577191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.577685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.577727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.583509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.583842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.583877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.589869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.590204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.590238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.595851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.596171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.596206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.602040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.602367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.602402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.608065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.608383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.608419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.614382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.614709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.614746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.620878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.621307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.621483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.627198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.627685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.627876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.633961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.634448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.634630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.640470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.641019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.641307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.647292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.647790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.647999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.653930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.654427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.654700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.660524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.661095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.661351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.667413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.726 [2024-09-28 09:03:21.667980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.726 [2024-09-28 09:03:21.668213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.726 [2024-09-28 09:03:21.674400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.727 [2024-09-28 09:03:21.674929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.727 [2024-09-28 09:03:21.675102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.727 [2024-09-28 09:03:21.680645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.727 [2024-09-28 09:03:21.681029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.727 [2024-09-28 09:03:21.681089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.727 [2024-09-28 09:03:21.686614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.727 [2024-09-28 09:03:21.687165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.727 [2024-09-28 09:03:21.687207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.727 [2024-09-28 09:03:21.692684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.727 [2024-09-28 09:03:21.693070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.727 [2024-09-28 09:03:21.693127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.727 [2024-09-28 09:03:21.698637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.727 [2024-09-28 09:03:21.699179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.727 [2024-09-28 09:03:21.699221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.727 [2024-09-28 09:03:21.704687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.727 [2024-09-28 09:03:21.705143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.727 [2024-09-28 09:03:21.705218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.727 [2024-09-28 09:03:21.710856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200019dfef90 00:25:43.727 [2024-09-28 09:03:21.711174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.727 [2024-09-28 09:03:21.711209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:43.727 [2024-09-28 09:03:21.716772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.727 [2024-09-28 09:03:21.717250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.727 [2024-09-28 09:03:21.717315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.986 [2024-09-28 09:03:21.723230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.986 [2024-09-28 09:03:21.723598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.986 [2024-09-28 09:03:21.723637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.986 5034.00 IOPS, 629.25 MiB/s [2024-09-28 09:03:21.730141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200019dfef90 00:25:43.986 [2024-09-28 09:03:21.730363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:43.986 [2024-09-28 09:03:21.730393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:43.986 00:25:43.986 Latency(us) 00:25:43.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.986 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:43.986 nvme0n1 : 2.00 5032.71 629.09 0.00 0.00 3171.33 2249.08 7804.74 00:25:43.986 =================================================================================================================== 00:25:43.986 Total : 5032.71 629.09 0.00 0.00 3171.33 2249.08 7804.74 00:25:43.986 { 00:25:43.986 "results": [ 00:25:43.986 { 00:25:43.986 "job": "nvme0n1", 00:25:43.986 "core_mask": "0x2", 00:25:43.986 "workload": "randwrite", 00:25:43.986 "status": "finished", 00:25:43.986 "queue_depth": 16, 00:25:43.986 "io_size": 131072, 00:25:43.986 "runtime": 2.003693, 00:25:43.986 "iops": 5032.707106328165, 00:25:43.986 "mibps": 629.0883882910206, 00:25:43.986 "io_failed": 0, 00:25:43.986 "io_timeout": 0, 00:25:43.986 "avg_latency_us": 3171.3277415167145, 00:25:43.986 "min_latency_us": 2249.0763636363636, 00:25:43.986 "max_latency_us": 7804.741818181818 00:25:43.986 } 00:25:43.986 ], 00:25:43.986 "core_count": 1 00:25:43.986 } 00:25:43.986 09:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:43.986 09:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:43.986 09:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat 
-b nvme0n1 00:25:43.986 09:03:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:43.986 | .driver_specific 00:25:43.986 | .nvme_error 00:25:43.986 | .status_code 00:25:43.986 | .command_transient_transport_error' 00:25:44.245 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 325 > 0 )) 00:25:44.245 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86590 00:25:44.245 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86590 ']' 00:25:44.245 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86590 00:25:44.245 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:25:44.245 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:44.245 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86590 00:25:44.245 killing process with pid 86590 00:25:44.245 Received shutdown signal, test time was about 2.000000 seconds 00:25:44.245 00:25:44.245 Latency(us) 00:25:44.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.245 =================================================================================================================== 00:25:44.245 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:44.245 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:44.245 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:44.245 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86590' 00:25:44.245 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86590 00:25:44.245 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86590 00:25:45.182 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 86355 00:25:45.182 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 86355 ']' 00:25:45.182 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 86355 00:25:45.182 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:25:45.182 09:03:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:45.182 09:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86355 00:25:45.182 killing process with pid 86355 00:25:45.182 09:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:45.182 09:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:45.182 09:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86355' 00:25:45.182 09:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 86355 00:25:45.182 09:03:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 86355 00:25:46.118 ************************************ 00:25:46.118 END TEST nvmf_digest_error 00:25:46.118 ************************************ 00:25:46.118 00:25:46.118 real 0m21.935s 00:25:46.118 user 0m41.646s 00:25:46.118 sys 0m4.695s 00:25:46.118 09:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:46.118 09:03:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:46.118 09:03:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:46.118 09:03:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:46.118 09:03:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:46.118 09:03:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:46.118 rmmod nvme_tcp 00:25:46.118 rmmod nvme_fabrics 00:25:46.118 rmmod nvme_keyring 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:46.118 Process with pid 86355 is not found 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 86355 ']' 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 86355 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 86355 ']' 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 86355 00:25:46.118 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86355) - No such process 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 86355 is not found' 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 
00:25:46.118 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:25:46.377 00:25:46.377 real 0m46.065s 00:25:46.377 user 1m26.126s 00:25:46.377 sys 0m9.839s 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:46.377 ************************************ 00:25:46.377 END TEST nvmf_digest 00:25:46.377 ************************************ 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:46.377 09:03:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.637 ************************************ 00:25:46.637 START TEST nvmf_host_multipath 00:25:46.637 ************************************ 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:46.637 * Looking for test storage... 
00:25:46.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:46.637 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:46.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.638 --rc genhtml_branch_coverage=1 00:25:46.638 --rc genhtml_function_coverage=1 00:25:46.638 --rc genhtml_legend=1 00:25:46.638 --rc geninfo_all_blocks=1 00:25:46.638 --rc geninfo_unexecuted_blocks=1 00:25:46.638 00:25:46.638 ' 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:46.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.638 --rc genhtml_branch_coverage=1 00:25:46.638 --rc genhtml_function_coverage=1 00:25:46.638 --rc genhtml_legend=1 00:25:46.638 --rc geninfo_all_blocks=1 00:25:46.638 --rc geninfo_unexecuted_blocks=1 00:25:46.638 00:25:46.638 ' 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:46.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.638 --rc genhtml_branch_coverage=1 00:25:46.638 --rc genhtml_function_coverage=1 00:25:46.638 --rc genhtml_legend=1 00:25:46.638 --rc geninfo_all_blocks=1 00:25:46.638 --rc geninfo_unexecuted_blocks=1 00:25:46.638 00:25:46.638 ' 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:46.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:46.638 --rc genhtml_branch_coverage=1 00:25:46.638 --rc genhtml_function_coverage=1 00:25:46.638 --rc genhtml_legend=1 00:25:46.638 --rc geninfo_all_blocks=1 00:25:46.638 --rc geninfo_unexecuted_blocks=1 00:25:46.638 00:25:46.638 ' 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:46.638 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:46.638 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:46.639 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:46.639 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:46.639 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:46.639 Cannot find device "nvmf_init_br" 00:25:46.639 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:25:46.639 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:46.639 Cannot find device "nvmf_init_br2" 00:25:46.639 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:25:46.639 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:46.639 Cannot find device "nvmf_tgt_br" 00:25:46.897 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:25:46.897 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:46.897 Cannot find device "nvmf_tgt_br2" 00:25:46.897 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:25:46.897 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:46.897 Cannot find device "nvmf_init_br" 00:25:46.897 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:46.898 Cannot find device "nvmf_init_br2" 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:46.898 Cannot find device "nvmf_tgt_br" 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:46.898 Cannot find device "nvmf_tgt_br2" 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:46.898 Cannot find device "nvmf_br" 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:46.898 Cannot find device "nvmf_init_if" 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:46.898 Cannot find device "nvmf_init_if2" 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:25:46.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:46.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:46.898 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:47.158 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:47.158 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:25:47.158 00:25:47.158 --- 10.0.0.3 ping statistics --- 00:25:47.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.158 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:47.158 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:47.158 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:25:47.158 00:25:47.158 --- 10.0.0.4 ping statistics --- 00:25:47.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.158 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:47.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:47.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:25:47.158 00:25:47.158 --- 10.0.0.1 ping statistics --- 00:25:47.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.158 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:47.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:47.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:25:47.158 00:25:47.158 --- 10.0.0.2 ping statistics --- 00:25:47.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.158 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # return 0 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # nvmfpid=86918 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # waitforlisten 86918 00:25:47.158 09:03:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:47.158 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 86918 ']' 00:25:47.158 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.158 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:47.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.158 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.158 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:47.158 09:03:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:47.158 [2024-09-28 09:03:25.124965] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:25:47.158 [2024-09-28 09:03:25.125128] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.417 [2024-09-28 09:03:25.299951] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:47.676 [2024-09-28 09:03:25.533574] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.676 [2024-09-28 09:03:25.533653] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.676 [2024-09-28 09:03:25.533686] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.676 [2024-09-28 09:03:25.533702] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.676 [2024-09-28 09:03:25.533732] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:47.676 [2024-09-28 09:03:25.533933] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.676 [2024-09-28 09:03:25.534103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.936 [2024-09-28 09:03:25.695541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:48.194 09:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:48.194 09:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:25:48.194 09:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:25:48.194 09:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:48.194 09:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:48.194 09:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.194 09:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=86918 00:25:48.194 09:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:48.451 [2024-09-28 09:03:26.390589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.451 09:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:49.019 Malloc0 00:25:49.019 09:03:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:49.279 09:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:49.538 09:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:49.798 [2024-09-28 09:03:27.540555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:49.798 09:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:49.798 [2024-09-28 09:03:27.768674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:49.798 09:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:49.798 09:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=86978 00:25:49.798 09:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:49.798 09:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 86978 /var/tmp/bdevperf.sock 00:25:49.798 09:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 86978 ']' 00:25:49.798 09:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:49.798 09:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:49.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:49.798 09:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:49.798 09:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:49.798 09:03:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:51.178 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:51.178 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:25:51.178 09:03:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:51.178 09:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:51.443 Nvme0n1 00:25:51.443 09:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:52.054 Nvme0n1 00:25:52.054 09:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:25:52.054 09:03:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:52.991 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:25:52.991 09:03:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:53.250 09:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:53.509 09:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:25:53.509 09:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87019 00:25:53.509 09:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86918 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:53.509 09:03:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:00.070 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:00.070 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:00.070 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:00.070 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:00.070 Attaching 4 probes... 00:26:00.070 @path[10.0.0.3, 4421]: 16099 00:26:00.070 @path[10.0.0.3, 4421]: 16429 00:26:00.070 @path[10.0.0.3, 4421]: 16324 00:26:00.070 @path[10.0.0.3, 4421]: 16272 00:26:00.070 @path[10.0.0.3, 4421]: 16336 00:26:00.070 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:00.070 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:00.070 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:00.070 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:00.070 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:00.070 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:00.070 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87019 00:26:00.070 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:00.070 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:26:00.071 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:00.071 09:03:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:26:00.329 09:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:26:00.329 09:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87137 00:26:00.329 09:03:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86918 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:00.329 09:03:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:06.892 Attaching 4 probes... 00:26:06.892 @path[10.0.0.3, 4420]: 16265 00:26:06.892 @path[10.0.0.3, 4420]: 16344 00:26:06.892 @path[10.0.0.3, 4420]: 16616 00:26:06.892 @path[10.0.0.3, 4420]: 16577 00:26:06.892 @path[10.0.0.3, 4420]: 16410 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87137 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:26:06.892 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:07.152 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:26:07.152 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87251 00:26:07.152 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86918 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:07.152 09:03:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:13.722 09:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:13.722 09:03:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:13.722 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:13.722 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:13.723 Attaching 4 probes... 00:26:13.723 @path[10.0.0.3, 4421]: 11758 00:26:13.723 @path[10.0.0.3, 4421]: 16129 00:26:13.723 @path[10.0.0.3, 4421]: 16100 00:26:13.723 @path[10.0.0.3, 4421]: 16219 00:26:13.723 @path[10.0.0.3, 4421]: 15983 00:26:13.723 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:13.723 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:13.723 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:13.723 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:13.723 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:13.723 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:13.723 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87251 00:26:13.723 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:13.723 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:26:13.723 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:26:13.723 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:26:13.981 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:26:13.981 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86918 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:13.981 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87358 00:26:13.981 09:03:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:20.546 09:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:20.546 09:03:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:26:20.546 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:26:20.546 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:20.546 Attaching 4 probes... 
00:26:20.546 00:26:20.546 00:26:20.546 00:26:20.546 00:26:20.546 00:26:20.546 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:20.547 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:20.547 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:20.547 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:26:20.547 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:26:20.547 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:26:20.547 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87358 00:26:20.547 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:20.547 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:26:20.547 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:26:20.547 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:20.806 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:26:20.806 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87476 00:26:20.806 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86918 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:20.806 09:03:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:27.370 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:27.371 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:27.371 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:27.371 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:27.371 Attaching 4 probes... 
00:26:27.371 @path[10.0.0.3, 4421]: 15550 00:26:27.371 @path[10.0.0.3, 4421]: 15935 00:26:27.371 @path[10.0.0.3, 4421]: 15995 00:26:27.371 @path[10.0.0.3, 4421]: 16099 00:26:27.371 @path[10.0.0.3, 4421]: 15974 00:26:27.371 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:27.371 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:27.371 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:27.371 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:27.371 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:27.371 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:27.371 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87476 00:26:27.371 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:27.371 09:04:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:27.371 09:04:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:26:28.312 09:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:26:28.312 09:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87594 00:26:28.312 09:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86918 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:28.312 09:04:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:34.881 Attaching 4 probes... 
00:26:34.881 @path[10.0.0.3, 4420]: 15423 00:26:34.881 @path[10.0.0.3, 4420]: 15848 00:26:34.881 @path[10.0.0.3, 4420]: 15829 00:26:34.881 @path[10.0.0.3, 4420]: 15810 00:26:34.881 @path[10.0.0.3, 4420]: 15848 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87594 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:34.881 [2024-09-28 09:04:12.779879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:34.881 09:04:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:35.141 09:04:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:26:41.739 09:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:26:41.739 09:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87769 00:26:41.739 09:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:41.739 09:04:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86918 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:48.304 Attaching 4 probes... 
00:26:48.304 @path[10.0.0.3, 4421]: 15435 00:26:48.304 @path[10.0.0.3, 4421]: 15693 00:26:48.304 @path[10.0.0.3, 4421]: 15764 00:26:48.304 @path[10.0.0.3, 4421]: 15877 00:26:48.304 @path[10.0.0.3, 4421]: 15855 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87769 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 86978 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 86978 ']' 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 86978 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86978 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:48.304 killing process with pid 86978 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86978' 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 86978 00:26:48.304 09:04:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 86978 00:26:48.304 { 00:26:48.304 "results": [ 00:26:48.304 { 00:26:48.304 "job": "Nvme0n1", 00:26:48.304 "core_mask": "0x4", 00:26:48.304 "workload": "verify", 00:26:48.304 "status": "terminated", 00:26:48.304 "verify_range": { 00:26:48.304 "start": 0, 00:26:48.304 "length": 16384 00:26:48.304 }, 00:26:48.304 "queue_depth": 128, 00:26:48.304 "io_size": 4096, 00:26:48.304 "runtime": 55.50476, 00:26:48.304 "iops": 6828.135100485076, 00:26:48.304 "mibps": 26.672402736269827, 00:26:48.304 "io_failed": 0, 00:26:48.304 "io_timeout": 0, 00:26:48.304 "avg_latency_us": 18720.160726209622, 00:26:48.304 "min_latency_us": 495.24363636363637, 00:26:48.304 "max_latency_us": 7046430.72 00:26:48.304 } 00:26:48.304 ], 00:26:48.304 "core_count": 1 00:26:48.304 } 00:26:48.304 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 86978 00:26:48.304 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:48.572 [2024-09-28 09:03:27.867766] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 
24.03.0 initialization... 00:26:48.572 [2024-09-28 09:03:27.867934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86978 ] 00:26:48.572 [2024-09-28 09:03:28.018310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.572 [2024-09-28 09:03:28.175273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.572 [2024-09-28 09:03:28.329524] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:48.572 [2024-09-28 09:03:29.743732] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:26:48.572 Running I/O for 90 seconds... 00:26:48.572 8168.00 IOPS, 31.91 MiB/s 8223.50 IOPS, 32.12 MiB/s 8226.33 IOPS, 32.13 MiB/s 8223.75 IOPS, 32.12 MiB/s 8212.60 IOPS, 32.08 MiB/s 8199.83 IOPS, 32.03 MiB/s 8194.14 IOPS, 32.01 MiB/s 8179.38 IOPS, 31.95 MiB/s [2024-09-28 09:03:38.094866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.572 [2024-09-28 09:03:38.094957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:48.572 [2024-09-28 09:03:38.095038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.572 [2024-09-28 09:03:38.095066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:48.572 [2024-09-28 09:03:38.095095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.572 [2024-09-28 09:03:38.095115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:48.572 [2024-09-28 09:03:38.095142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.572 [2024-09-28 09:03:38.095161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:48.572 [2024-09-28 09:03:38.095187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.572 [2024-09-28 09:03:38.095205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:48.572 [2024-09-28 09:03:38.095230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.572 [2024-09-28 09:03:38.095249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:48.572 [2024-09-28 09:03:38.095275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.572 [2024-09-28 09:03:38.095293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:48.572 [2024-09-28 09:03:38.095319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.572 [2024-09-28 09:03:38.095337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:48.572 [2024-09-28 09:03:38.095362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.572 [2024-09-28 09:03:38.095381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:48.572 [2024-09-28 09:03:38.095435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.572 [2024-09-28 09:03:38.095456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:48.572 [2024-09-28 09:03:38.095482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.572 [2024-09-28 09:03:38.095501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:48.572 [2024-09-28 09:03:38.095527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.572 [2024-09-28 09:03:38.095546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:48.572 [2024-09-28 09:03:38.095572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.572 [2024-09-28 09:03:38.095591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.095618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.095637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.095663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.095682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.095707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.095726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.095752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.573 [2024-09-28 09:03:38.095772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.095797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.095816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.095862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.095883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.095909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.095928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.095954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.095974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.096939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.096976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.097005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.097027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.097069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.097111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.097157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.097178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.097207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.097242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.097284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.097304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.097331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.097361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:26:48.573 [2024-09-28 09:03:38.097390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.097411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.097439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.573 [2024-09-28 09:03:38.097459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.097510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.097536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.097565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.097586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.097614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.097634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:48.573 [2024-09-28 09:03:38.097660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.573 [2024-09-28 09:03:38.097679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.097707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.097727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.097753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.097773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.097799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.097834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.097875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.097900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.097928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.097949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.097976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.097997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.098061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.098109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.098158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.098205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.098266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.098326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.098374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.098420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.098468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.098515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.098562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.098608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.098665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.098712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.098758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.098818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.098884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:48.574 [2024-09-28 09:03:38.098932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.098959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.098980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.099035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.099083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.099131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.099178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.099240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.099295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.099344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.099391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 
nsid:1 lba:12832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.099437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.099484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.574 [2024-09-28 09:03:38.099530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.099577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.099623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.574 [2024-09-28 09:03:38.099649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.574 [2024-09-28 09:03:38.099669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.099696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.575 [2024-09-28 09:03:38.099716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.099742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.575 [2024-09-28 09:03:38.099762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.099788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.575 [2024-09-28 09:03:38.099808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.099847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.575 [2024-09-28 09:03:38.099877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.099906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.575 [2024-09-28 09:03:38.099927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.099953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.575 [2024-09-28 09:03:38.099974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.100001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.575 [2024-09-28 09:03:38.100021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.100047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.575 [2024-09-28 09:03:38.100068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.100095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.575 [2024-09-28 09:03:38.100115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.100142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.575 [2024-09-28 09:03:38.100161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.100187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.575 [2024-09-28 09:03:38.100207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.100234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.575 [2024-09-28 09:03:38.100255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.101996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.575 [2024-09-28 09:03:38.102036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:26:48.575 [2024-09-28 09:03:38.102144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:12872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:12920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.102837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:12968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.102862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.103162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.103196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.103244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.103265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.103294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.103314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.103340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.103359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.103386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.103406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.103432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.103452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.103478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.103498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.103524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.103544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:38.103571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:38.103591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:48.575 8135.11 IOPS, 31.78 MiB/s 8142.40 IOPS, 31.81 MiB/s 8145.45 IOPS, 31.82 MiB/s 8155.33 IOPS, 31.86 MiB/s 8170.46 IOPS, 31.92 MiB/s 8173.71 IOPS, 31.93 MiB/s [2024-09-28 09:03:44.684647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:44.684713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:48.575 [2024-09-28 09:03:44.684861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.575 [2024-09-28 09:03:44.684894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.684927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:49168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.684969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.685038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.685087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:49192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.685164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 
09:03:44.685204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.685223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.685268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.685320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.685365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.685408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.685467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.685513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.685558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:49264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.685603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:49272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.576 [2024-09-28 09:03:44.685663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 
sqhd:0053 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:48704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.685710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.685758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:48720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.685817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.685861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.685925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.685970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.685996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:48752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.686015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:48760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.686063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:48768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.686108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:48776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.686152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:48784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.686197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:48792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.686252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:48800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.686297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.686342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:48816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.686386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:48824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.686431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.686476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.686521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:48848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 09:03:44.686567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.576 [2024-09-28 
09:03:44.686611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:48.576 [2024-09-28 09:03:44.686637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.577 [2024-09-28 09:03:44.686656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.686682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.577 [2024-09-28 09:03:44.686701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.686726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:48880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.577 [2024-09-28 09:03:44.686746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.686773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.577 [2024-09-28 09:03:44.686800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.686886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:49280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.686912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.686940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.686980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.687028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.687075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.687121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49320 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.687167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.687227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:49336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.687272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.687317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:49352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.687362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:49360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.687406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:49368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.687469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.577 [2024-09-28 09:03:44.687516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.577 [2024-09-28 09:03:44.687561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:48912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.577 [2024-09-28 09:03:44.687606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:23 nsid:1 lba:48920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.577 [2024-09-28 09:03:44.687651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.577 [2024-09-28 09:03:44.687696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.577 [2024-09-28 09:03:44.687740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.577 [2024-09-28 09:03:44.687784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.577 [2024-09-28 09:03:44.687843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.687894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.687938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.687964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:49392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.687984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.688010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.688028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.688063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:49408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.688084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.688110] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.688129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.688171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:49424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.688191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.688219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.688240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.688266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.688285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.688311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.688331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.688357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.688377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.688404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.688424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.688470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.577 [2024-09-28 09:03:44.688496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:48.577 [2024-09-28 09:03:44.688524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.688544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.688570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:49488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.688589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 
sqhd:000e p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.688615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.688635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.688670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.688691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.688717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.688736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.688763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.688792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.688855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.688877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.688905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.688926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.688954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.688974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.689022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:48984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.689071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.689120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.689182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:49008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.689243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.689295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:49584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689636] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:49608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.689966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.689986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.690013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.690032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.690061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.578 [2024-09-28 09:03:44.690082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.690108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.690127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.690153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.690188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.690215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.690235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.690261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.690281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.690307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.690327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.690353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.690372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.690398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.690418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.690444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.578 [2024-09-28 09:03:44.690463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:48.578 [2024-09-28 09:03:44.690489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:44.690508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.690544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:44.690565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.690592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:84 nsid:1 lba:49104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:44.690611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.690637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:44.690656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.690682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:44.690701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.690728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:44.690747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.690774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:44.690794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.691515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:44.691549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.691592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:44.691614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.691648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:44.691668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.691702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:44.691721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.691756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:44.691775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.691824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:44.691847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.691895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:44.691917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.691951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:44.691971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:44.692024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:44.692049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:48.579 8051.73 IOPS, 31.45 MiB/s 7643.19 IOPS, 29.86 MiB/s 7671.24 IOPS, 29.97 MiB/s 7693.06 IOPS, 30.05 MiB/s 7713.84 IOPS, 30.13 MiB/s 7727.35 IOPS, 30.18 MiB/s 7741.10 IOPS, 30.24 MiB/s [2024-09-28 09:03:51.764078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.764160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.764262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.764311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.764373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.764417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:48.579 [2024-09-28 09:03:51.764461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.764506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.764549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.764617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.764665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.764708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.764753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:51.764840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:51.764892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:51.764939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.764965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:51.764984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.765010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:51.765030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.765058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:51.765078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.765104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:51.765123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.765164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.579 [2024-09-28 09:03:51.765183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.765209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.765228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.765265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.765286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:48.579 [2024-09-28 09:03:51.765312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.579 [2024-09-28 09:03:51.765331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.765358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.765409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.765456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.765500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.765545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.765590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.765634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.765679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.765723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.765768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.765835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.765886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:26:48.580 [2024-09-28 09:03:51.765931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.765976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.765994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.766039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.766083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.766127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.580 [2024-09-28 09:03:51.766172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.580 [2024-09-28 09:03:51.766234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.580 [2024-09-28 09:03:51.766282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.580 [2024-09-28 09:03:51.766326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.580 [2024-09-28 09:03:51.766371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.580 [2024-09-28 09:03:51.766425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.580 [2024-09-28 09:03:51.766473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.580 [2024-09-28 09:03:51.766520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.766600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.766648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.766693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.766737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.766781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.766844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.766890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.766936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.766961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.766980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.767006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.767036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.767064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.767084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.767109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.767128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.767153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.767173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.767198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.580 [2024-09-28 09:03:51.767218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:48.580 [2024-09-28 09:03:51.767243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.581 [2024-09-28 09:03:51.767262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.581 [2024-09-28 09:03:51.767307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:48.581 [2024-09-28 09:03:51.767353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.767398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.767443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.767493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.767539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.767584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.767638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.767720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.767764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.767824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.767889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.767935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.767961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.767981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.768026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.768071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.768117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.581 [2024-09-28 09:03:51.768163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.581 [2024-09-28 09:03:51.768223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.581 [2024-09-28 09:03:51.768279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.581 [2024-09-28 09:03:51.768325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.581 [2024-09-28 09:03:51.768371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.581 [2024-09-28 09:03:51.768419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.581 [2024-09-28 09:03:51.768465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.581 [2024-09-28 09:03:51.768510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.768553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.768598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.768642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.768686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.768730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.768775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 
dnr:0 00:26:48.581 [2024-09-28 09:03:51.768852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.768876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.768923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.581 [2024-09-28 09:03:51.768969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:48.581 [2024-09-28 09:03:51.768995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.582 [2024-09-28 09:03:51.769015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.769041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.582 [2024-09-28 09:03:51.769060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.769087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.582 [2024-09-28 09:03:51.769106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.769133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.582 [2024-09-28 09:03:51.769153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.769179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.582 [2024-09-28 09:03:51.769202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.769244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.582 [2024-09-28 09:03:51.769263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.582 [2024-09-28 09:03:51.770157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770744] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.770953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.770973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.771005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.771034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.771085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.771110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.771142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.771163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.771195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.771215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.771247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.771266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.771298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:48.582 [2024-09-28 09:03:51.771317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.771349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.771368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.771399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.771418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.771450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.771469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:03:51.771501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:03:51.771520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:48.582 7701.95 IOPS, 30.09 MiB/s 7367.09 IOPS, 28.78 MiB/s 7060.12 IOPS, 27.58 MiB/s 6777.72 IOPS, 26.48 MiB/s 6517.04 IOPS, 25.46 MiB/s 6275.67 IOPS, 24.51 MiB/s 6051.54 IOPS, 23.64 MiB/s 5871.59 IOPS, 22.94 MiB/s 5935.87 IOPS, 23.19 MiB/s 6000.65 IOPS, 23.44 MiB/s 6065.12 IOPS, 23.69 MiB/s 6123.03 IOPS, 23.92 MiB/s 6178.00 IOPS, 24.13 MiB/s 6226.63 IOPS, 24.32 MiB/s [2024-09-28 09:04:05.185058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:04:05.185139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:04:05.185244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:04:05.185284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:04:05.185315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:04:05.185334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:04:05.185359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:04:05.185378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:04:05.185402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:11 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:04:05.185421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:04:05.185445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:04:05.185463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:04:05.185488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.582 [2024-09-28 09:04:05.185506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:48.582 [2024-09-28 09:04:05.185531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.185549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.185574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.185593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.185618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.185636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.185660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.185677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.185702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.185720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.185745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.185763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.185787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.185814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.185842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.185880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.185908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.185927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.185952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.185970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.185996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.186017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.186060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:57720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.186104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.186147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.186191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:57744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.186234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.186278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 
00:26:48.583 [2024-09-28 09:04:05.186337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186693] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.583 [2024-09-28 09:04:05.186917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.186950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.186967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.187000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.187019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.187035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.187052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.187068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.187085] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:57792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.187101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.187118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.187134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.583 [2024-09-28 09:04:05.187151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.583 [2024-09-28 09:04:05.187166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:57872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.187516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.187548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.187581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.187613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.187645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.187677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.187708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.187741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:48.584 [2024-09-28 09:04:05.187781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:57896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:57920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.187980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.187995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:57944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.584 [2024-09-28 09:04:05.188028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 
09:04:05.188124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:58528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.584 [2024-09-28 09:04:05.188524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.584 [2024-09-28 09:04:05.188541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.585 [2024-09-28 09:04:05.188556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.188573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.188589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.188612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:57960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.188629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.188646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.188661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.188678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:57976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.188693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.188710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.188725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.188742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.188757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.188774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.188840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.188879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.188896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.188914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.188939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.188958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.188976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.188995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.585 [2024-09-28 09:04:05.189617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 
[2024-09-28 09:04:05.189634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bc80 is same with the state(6) to be set 00:26:48.585 [2024-09-28 09:04:05.189656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.585 [2024-09-28 09:04:05.189670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.585 [2024-09-28 09:04:05.189684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58168 len:8 PRP1 0x0 PRP2 0x0 00:26:48.585 [2024-09-28 09:04:05.189699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.585 [2024-09-28 09:04:05.189731] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.585 [2024-09-28 09:04:05.189744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58176 len:8 PRP1 0x0 PRP2 0x0 00:26:48.585 [2024-09-28 09:04:05.189758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.585 [2024-09-28 09:04:05.189802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.585 [2024-09-28 09:04:05.189814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58184 len:8 PRP1 0x0 PRP2 0x0 00:26:48.585 [2024-09-28 09:04:05.189829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.585 [2024-09-28 09:04:05.189854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.585 [2024-09-28 09:04:05.189867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58192 len:8 PRP1 0x0 PRP2 0x0 00:26:48.585 [2024-09-28 09:04:05.189897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.585 [2024-09-28 09:04:05.189914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.585 [2024-09-28 09:04:05.189926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.585 [2024-09-28 09:04:05.189938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58200 len:8 PRP1 0x0 PRP2 0x0 00:26:48.586 [2024-09-28 09:04:05.189953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.189968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.586 [2024-09-28 09:04:05.189979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.586 [2024-09-28 09:04:05.189991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58592 len:8 PRP1 0x0 PRP2 0x0 00:26:48.586 [2024-09-28 09:04:05.190006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.190021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.586 [2024-09-28 09:04:05.190033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.586 [2024-09-28 09:04:05.190045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58600 len:8 PRP1 0x0 PRP2 0x0 00:26:48.586 [2024-09-28 09:04:05.190059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.190082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.586 [2024-09-28 09:04:05.190095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.586 [2024-09-28 09:04:05.190107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58608 len:8 PRP1 0x0 PRP2 0x0 00:26:48.586 [2024-09-28 09:04:05.190122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.190137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.586 [2024-09-28 09:04:05.190148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.586 [2024-09-28 09:04:05.190160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58616 len:8 PRP1 0x0 PRP2 0x0 00:26:48.586 [2024-09-28 09:04:05.190175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.190189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.586 [2024-09-28 09:04:05.190203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.586 [2024-09-28 09:04:05.190216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58624 len:8 PRP1 0x0 PRP2 0x0 00:26:48.586 [2024-09-28 09:04:05.190230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.190245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.586 [2024-09-28 09:04:05.190256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.586 [2024-09-28 09:04:05.190269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58632 len:8 PRP1 0x0 PRP2 0x0 00:26:48.586 [2024-09-28 09:04:05.190283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.190298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.586 [2024-09-28 09:04:05.190309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.586 [2024-09-28 09:04:05.190321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58640 len:8 PRP1 0x0 PRP2 0x0 00:26:48.586 [2024-09-28 09:04:05.190336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.190350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:48.586 [2024-09-28 09:04:05.190362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:48.586 [2024-09-28 09:04:05.190373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58648 len:8 PRP1 0x0 PRP2 0x0 00:26:48.586 [2024-09-28 09:04:05.190388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.190609] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002bc80 was disconnected and freed. reset controller. 00:26:48.586 [2024-09-28 09:04:05.190758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.586 [2024-09-28 09:04:05.190789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.190826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.586 [2024-09-28 09:04:05.190844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.190861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.586 [2024-09-28 09:04:05.190886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.190904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:48.586 [2024-09-28 09:04:05.190919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.190936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.586 [2024-09-28 09:04:05.190955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.586 [2024-09-28 09:04:05.190980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:26:48.586 [2024-09-28 09:04:05.192129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:48.586 [2024-09-28 09:04:05.192183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:26:48.586 [2024-09-28 09:04:05.192600] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:48.586 [2024-09-28 09:04:05.192652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b280 with addr=10.0.0.3, port=4421 00:26:48.586 [2024-09-28 09:04:05.192682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:26:48.586 [2024-09-28 
09:04:05.192725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor
00:26:48.586 [2024-09-28 09:04:05.192770] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:48.586 [2024-09-28 09:04:05.192848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:48.586 [2024-09-28 09:04:05.192871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:48.586 [2024-09-28 09:04:05.192929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:48.586 [2024-09-28 09:04:05.192954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:48.586 6269.47 IOPS, 24.49 MiB/s 6307.38 IOPS, 24.64 MiB/s 6348.13 IOPS, 24.80 MiB/s 6388.23 IOPS, 24.95 MiB/s 6425.32 IOPS, 25.10 MiB/s 6462.17 IOPS, 25.24 MiB/s 6497.83 IOPS, 25.38 MiB/s 6525.88 IOPS, 25.49 MiB/s 6555.93 IOPS, 25.61 MiB/s 6585.53 IOPS, 25.72 MiB/s [2024-09-28 09:04:15.274467] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:48.586 6615.35 IOPS, 25.84 MiB/s 6643.45 IOPS, 25.95 MiB/s 6670.71 IOPS, 26.06 MiB/s 6696.20 IOPS, 26.16 MiB/s 6715.40 IOPS, 26.23 MiB/s 6738.55 IOPS, 26.32 MiB/s 6759.88 IOPS, 26.41 MiB/s 6781.32 IOPS, 26.49 MiB/s 6801.22 IOPS, 26.57 MiB/s 6821.85 IOPS, 26.65 MiB/s Received shutdown signal, test time was about 55.505671 seconds
00:26:48.586
00:26:48.586 Latency(us)
00:26:48.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:48.586 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:48.586 Verification LBA range: start 0x0 length 0x4000
00:26:48.586 Nvme0n1 : 55.50 6828.14 26.67 0.00 0.00 18720.16 495.24 7046430.72
00:26:48.586 ===================================================================================================================
00:26:48.586 Total : 6828.14 26.67 0.00 0.00 18720.16 495.24 7046430.72
00:26:48.586 [2024-09-28 09:04:25.395725] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
00:26:48.586 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:48.586 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:26:48.586 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:26:48.586 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:26:48.586 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # nvmfcleanup
00:26:48.586 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
00:26:48.846 rmmod nvme_fabrics
00:26:48.846 rmmod nvme_keyring
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@513 -- # '[' -n 86918 ']'
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # killprocess 86918
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 86918 ']'
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 86918
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86918
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:48.846 killing process with pid 86918 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86918'
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 86918
00:26:48.846 09:04:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 86918
00:26:49.782 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']'
00:26:49.782 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]]
00:26:49.782 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini
00:26:49.782 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
00:26:49.782 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-save
00:26:49.782 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF
00:26:49.782 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-restore
00:26:49.782 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:49.782 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:26:49.782 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:26:49.782 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
00:26:50.041
00:26:50.041 real 1m3.587s
00:26:50.041 user 2m55.958s
00:26:50.041 sys 0m17.129s
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:50.041 ************************************
00:26:50.041 END TEST nvmf_host_multipath
00:26:50.041 ************************************
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:50.041 09:04:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:50.041 ************************************
00:26:50.041 START TEST nvmf_timeout
00:26:50.041 ************************************
00:26:50.041 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:26:50.301 * Looking for test storage...
00:26:50.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:50.301 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:26:50.301 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:26:50.301 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:26:50.301 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:26:50.301 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:50.301 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:50.301 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:50.301 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:50.301 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:26:50.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.302 --rc genhtml_branch_coverage=1 00:26:50.302 --rc genhtml_function_coverage=1 00:26:50.302 --rc genhtml_legend=1 00:26:50.302 --rc geninfo_all_blocks=1 00:26:50.302 --rc geninfo_unexecuted_blocks=1 00:26:50.302 00:26:50.302 ' 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:26:50.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.302 --rc genhtml_branch_coverage=1 00:26:50.302 --rc genhtml_function_coverage=1 00:26:50.302 --rc genhtml_legend=1 00:26:50.302 --rc geninfo_all_blocks=1 00:26:50.302 --rc geninfo_unexecuted_blocks=1 00:26:50.302 00:26:50.302 ' 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:26:50.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.302 --rc genhtml_branch_coverage=1 00:26:50.302 --rc genhtml_function_coverage=1 00:26:50.302 --rc genhtml_legend=1 00:26:50.302 --rc geninfo_all_blocks=1 00:26:50.302 --rc geninfo_unexecuted_blocks=1 00:26:50.302 00:26:50.302 ' 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:26:50.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.302 --rc genhtml_branch_coverage=1 00:26:50.302 --rc genhtml_function_coverage=1 00:26:50.302 --rc genhtml_legend=1 00:26:50.302 --rc geninfo_all_blocks=1 00:26:50.302 --rc geninfo_unexecuted_blocks=1 00:26:50.302 00:26:50.302 ' 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.302 
09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:50.302 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:26:50.302 09:04:28 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:26:50.302 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:50.303 Cannot find device "nvmf_init_br" 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:50.303 Cannot find device "nvmf_init_br2" 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:26:50.303 Cannot find device "nvmf_tgt_br" 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:50.303 Cannot find device "nvmf_tgt_br2" 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:50.303 Cannot find device "nvmf_init_br" 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:26:50.303 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:50.562 Cannot find device "nvmf_init_br2" 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:50.562 Cannot find device "nvmf_tgt_br" 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:50.562 Cannot find device "nvmf_tgt_br2" 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:50.562 Cannot find device "nvmf_br" 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:50.562 Cannot find device "nvmf_init_if" 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:50.562 Cannot find device "nvmf_init_if2" 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:50.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:50.562 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:50.562 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
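The "Cannot find device" messages above are harmless: nvmf_veth_init first tries to tear down any topology left over from a previous run, and each failing cleanup command is tolerated. What it then builds is a small bridged veth topology: two initiator interfaces in the default namespace, two target interfaces inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge and opened up for NVMe/TCP on port 4420. A minimal standalone reconstruction of that topology, using the interface names and 10.0.0.x addresses seen in the log (run as root; this is an illustrative sketch, not the test's own helper, and the iptables comment markers are omitted):

    #!/usr/bin/env bash
    set -e
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    # Two initiator-side and two target-side veth pairs; the *_br ends will join the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # Target interfaces live inside the namespace.
    ip link set nvmf_tgt_if  netns "$NS"
    ip link set nvmf_tgt_if2 netns "$NS"
    # Addresses: initiators 10.0.0.1/.2, targets 10.0.0.3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # Bring everything up.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up
    # Bridge the four *_br ends together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # Accept NVMe/TCP (port 4420) from the initiator interfaces and allow bridged forwarding.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Sanity check, matching the ping block that follows in the log.
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4

The FORWARD accept mirrors nvmf/common.sh and matters when br_netfilter is loaded, in which case bridged frames also traverse iptables.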
00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:50.822 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:50.822 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:26:50.822 00:26:50.822 --- 10.0.0.3 ping statistics --- 00:26:50.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.822 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:50.822 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:50.822 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:26:50.822 00:26:50.822 --- 10.0.0.4 ping statistics --- 00:26:50.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.822 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:50.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:50.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:26:50.822 00:26:50.822 --- 10.0.0.1 ping statistics --- 00:26:50.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.822 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:50.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:50.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:26:50.822 00:26:50.822 --- 10.0.0.2 ping statistics --- 00:26:50.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:50.822 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # return 0 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # nvmfpid=88145 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # waitforlisten 88145 00:26:50.822 09:04:28 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 88145 ']' 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:50.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:50.822 09:04:28 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.822 [2024-09-28 09:04:28.738142] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:26:50.822 [2024-09-28 09:04:28.738300] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.081 [2024-09-28 09:04:28.913894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:51.339 [2024-09-28 09:04:29.145210] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.339 [2024-09-28 09:04:29.145295] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.339 [2024-09-28 09:04:29.145330] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.339 [2024-09-28 09:04:29.145347] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.339 [2024-09-28 09:04:29.145364] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
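nvmfappstart, just above, launches nvmf_tgt inside the target namespace pinned to core mask 0x3 and then waitforlisten polls until the application answers on its JSON-RPC socket. A rough standalone equivalent of that launch-and-wait step (paths and arguments as logged; the readiness loop is a simplified stand-in for waitforlisten, and the captured PID here is that of the ip-netns-exec wrapper rather than nvmf_tgt itself):

    SPDK=/home/vagrant/spdk_repo/spdk
    # -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0x3: run on cores 0 and 1.
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Wait until the target serves JSON-RPC on the default /var/tmp/spdk.sock socket.
    for _ in $(seq 1 100); do
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done

Once the socket answers, the test installs the cleanup trap visible below so the target and namespace are torn down on exit.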
00:26:51.339 [2024-09-28 09:04:29.145571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.339 [2024-09-28 09:04:29.145584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.339 [2024-09-28 09:04:29.319933] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:51.906 09:04:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:51.906 09:04:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:51.906 09:04:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:26:51.906 09:04:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:51.906 09:04:29 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:51.906 09:04:29 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.906 09:04:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:51.906 09:04:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:52.165 [2024-09-28 09:04:30.035513] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.165 09:04:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:52.424 Malloc0 00:26:52.682 09:04:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:52.941 09:04:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:52.941 09:04:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:53.200 [2024-09-28 09:04:31.127642] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:53.200 09:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=88200 00:26:53.200 09:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:53.200 09:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 88200 /var/tmp/bdevperf.sock 00:26:53.200 09:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 88200 ']' 00:26:53.200 09:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:53.200 09:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:53.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:53.200 09:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
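Condensed, the target-side provisioning that host/timeout.sh performs over rpc.py amounts to five calls: create the TCP transport, back it with a 64 MB / 512-byte-block malloc bdev, expose that bdev through subsystem cnode1, and listen on the in-namespace address 10.0.0.3:4420. Restated as a plain shell sketch, with the flags exactly as recorded in the log above:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Transport, storage, subsystem, namespace, listener -- flags as logged.
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

bdevperf is then started with -z (idle until told to run) and its own RPC socket, -r /var/tmp/bdevperf.sock, which is why the controller attach and the workload kick-off that follow go through that socket rather than /var/tmp/spdk.sock.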
00:26:53.200 09:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:53.200 09:04:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:53.458 [2024-09-28 09:04:31.253999] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:26:53.458 [2024-09-28 09:04:31.254190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88200 ] 00:26:53.458 [2024-09-28 09:04:31.417129] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.717 [2024-09-28 09:04:31.627478] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.975 [2024-09-28 09:04:31.785212] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:54.234 09:04:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:54.234 09:04:32 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:54.234 09:04:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:54.493 09:04:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:54.751 NVMe0n1 00:26:54.751 09:04:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=88222 00:26:54.751 09:04:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:54.751 09:04:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:26:55.009 Running I/O for 10 seconds... 
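On the initiator side everything is driven through bdevperf's private RPC socket: the retry count is set to -1, the remote subsystem is attached as controller NVMe0 (yielding bdev NVMe0n1) with a 5-second controller-loss timeout and a 2-second reconnect delay, and only then does bdevperf.py perform_tests start the 10-second verify workload defined on the bdevperf command line (-q 128 -o 4096 -w verify -t 10). As a condensed sketch, with socket path and arguments as logged:

    spdk=/home/vagrant/spdk_repo/spdk
    rpc_py="$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    # Retry/timeout options exactly as logged; these are what the timeout test exercises.
    $rpc_py bdev_nvme_set_options -r -1
    $rpc_py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # Start the queued verify job; bdevperf was launched with -z, so it waits for this RPC.
    $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The listener removal that follows is the fault injection: with the listener gone, in-flight reads are aborted (SQ deletion) and the host falls into exactly the reconnect/timeout handling that the 5 s controller-loss timeout and 2 s reconnect delay are configured to exercise.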
00:26:55.947 09:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:26:55.947 6549.00 IOPS, 25.58 MiB/s [2024-09-28 09:04:33.840938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set 00:26:55.947
[... the same recv-state *ERROR* notice for tqpair=0x618000002c80 is printed many more times while the listener is torn down; repeated entries elided ...]
00:26:55.948 [2024-09-28 09:04:33.842568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-09-28 09:04:33.842608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... an identical READ command / ABORTED - SQ DELETION completion pair follows for each remaining in-flight I/O, lba:56616 through lba:57064; repeated entries elided ...] 00:26:55.950
[2024-09-28 09:04:33.844499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.844530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.844562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.844596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:57096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.844627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.844657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:57112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.844687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:57120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.844718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:57128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.844748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:57136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.844778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.844869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.844906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.844939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:57168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.844974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:57176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.844988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:57184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:57192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:57208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:57224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:17 nsid:1 lba:57232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:57240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:57248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:57264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:57272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.950 [2024-09-28 09:04:33.845491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:57296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.950 [2024-09-28 09:04:33.845504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:57304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:57312 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:57328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:57336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:57360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:57376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:57384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:55.951 [2024-09-28 09:04:33.845896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.845978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.845992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.846023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:57432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.846053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:57440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.846083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.846117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.846147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:57464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.846178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:57472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.846208] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:57480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.846255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.951 [2024-09-28 09:04:33.846288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:57512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.951 [2024-09-28 09:04:33.846319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.951 [2024-09-28 09:04:33.846350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.951 [2024-09-28 09:04:33.846381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.951 [2024-09-28 09:04:33.846411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:57544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.951 [2024-09-28 09:04:33.846441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.951 [2024-09-28 09:04:33.846471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.951 [2024-09-28 09:04:33.846504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.951 [2024-09-28 09:04:33.846534] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.951 [2024-09-28 09:04:33.846564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.951 [2024-09-28 09:04:33.846581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.951 [2024-09-28 09:04:33.846594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.952 [2024-09-28 09:04:33.846613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:57592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.952 [2024-09-28 09:04:33.846626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.952 [2024-09-28 09:04:33.846643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:57600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.952 [2024-09-28 09:04:33.846657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.952 [2024-09-28 09:04:33.846674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.952 [2024-09-28 09:04:33.846688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.952 [2024-09-28 09:04:33.846706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.952 [2024-09-28 09:04:33.846721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.952 [2024-09-28 09:04:33.846741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:55.952 [2024-09-28 09:04:33.846754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.952 [2024-09-28 09:04:33.846771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:55.952 [2024-09-28 09:04:33.846784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.952 [2024-09-28 09:04:33.846812] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:26:55.952 [2024-09-28 09:04:33.846834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:55.952 [2024-09-28 09:04:33.846853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:55.952 [2024-09-28 09:04:33.846867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57504 len:8 PRP1 0x0 PRP2 0x0 00:26:55.952 [2024-09-28 09:04:33.846883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:55.952 [2024-09-28 09:04:33.847124] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b280 was disconnected and freed. reset controller. 00:26:55.952 [2024-09-28 09:04:33.847399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.952 [2024-09-28 09:04:33.847517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:55.952 [2024-09-28 09:04:33.847665] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.952 [2024-09-28 09:04:33.847701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:55.952 [2024-09-28 09:04:33.847719] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:55.952 [2024-09-28 09:04:33.847755] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:55.952 [2024-09-28 09:04:33.847781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:55.952 [2024-09-28 09:04:33.847798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:55.952 [2024-09-28 09:04:33.847863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:55.952 [2024-09-28 09:04:33.847903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:55.952 [2024-09-28 09:04:33.847924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:55.952 09:04:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:26:58.083 3538.00 IOPS, 13.82 MiB/s 2358.67 IOPS, 9.21 MiB/s [2024-09-28 09:04:35.848055] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.083 [2024-09-28 09:04:35.848151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:58.083 [2024-09-28 09:04:35.848172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:58.083 [2024-09-28 09:04:35.848222] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:58.083 [2024-09-28 09:04:35.848251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:58.083 [2024-09-28 09:04:35.848268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:58.083 [2024-09-28 09:04:35.848283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:58.083 [2024-09-28 09:04:35.848323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
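The block above is the host side of the outage: the TCP qpair is freed, bdev_nvme resets the controller, each nvme_tcp_qpair_connect_sock attempt to 10.0.0.3:4420 fails with connect() errno 111 (connection refused, i.e. nothing is accepting connections there at that point), and the reset is retried while host/timeout.sh sits in its sleep 2. As a hedged illustration only (this loop is not part of the test script), the surviving controller could be watched over the same bdevperf RPC socket while the reconnects fail, using the same rpc.py and jq calls that appear elsewhere in this log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  # Poll a few times, two seconds apart, roughly matching the cadence above.
  for _ in 1 2 3 4 5; do
      # Prints "NVMe0" while the controller object still exists; an empty
      # result would mean it was deleted (e.g. after its loss timeout expired).
      "$RPC" -s "$SOCK" bdev_nvme_get_controllers | jq -r '.[].name'
      sleep 2
  done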
00:26:58.083 [2024-09-28 09:04:35.848341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:58.083 09:04:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:26:58.083 09:04:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:58.083 09:04:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:58.342 09:04:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:26:58.342 09:04:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:26:58.342 09:04:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:58.342 09:04:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:58.600 09:04:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:26:58.600 09:04:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:27:00.054 1769.00 IOPS, 6.91 MiB/s 1415.20 IOPS, 5.53 MiB/s [2024-09-28 09:04:37.848520] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.054 [2024-09-28 09:04:37.848617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:27:00.054 [2024-09-28 09:04:37.848640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:27:00.054 [2024-09-28 09:04:37.848676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:00.054 [2024-09-28 09:04:37.848705] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.054 [2024-09-28 09:04:37.848722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.054 [2024-09-28 09:04:37.848737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.054 [2024-09-28 09:04:37.848780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.054 [2024-09-28 09:04:37.848854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.953 1179.33 IOPS, 4.61 MiB/s 1010.86 IOPS, 3.95 MiB/s [2024-09-28 09:04:39.848950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.953 [2024-09-28 09:04:39.849021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.953 [2024-09-28 09:04:39.849057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.953 [2024-09-28 09:04:39.849071] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:01.953 [2024-09-28 09:04:39.849116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
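The host/timeout.sh@41 and @37 xtrace lines above show exactly what the get_controller and get_bdev helpers expand to. Reconstructed from that trace as a sketch (the function bodies below are inferred from the trace, not quoted from host/timeout.sh):

  get_controller() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_nvme_get_controllers | jq -r '.[].name'
  }
  get_bdev() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
          bdev_get_bdevs | jq -r '.[].name'
  }
  # The checks traced at timeout.sh@57/@58 then amount to: the NVMe0
  # controller and its NVMe0n1 bdev must still be registered even though
  # every reconnect attempt is currently failing.
  [[ $(get_controller) == NVMe0 ]]
  [[ $(get_bdev) == NVMe0n1 ]]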
00:27:02.889 884.50 IOPS, 3.46 MiB/s 00:27:02.889 Latency(us) 00:27:02.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.889 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:02.889 Verification LBA range: start 0x0 length 0x4000 00:27:02.889 NVMe0n1 : 8.10 873.89 3.41 15.81 0.00 143914.90 3932.16 7046430.72 00:27:02.889 =================================================================================================================== 00:27:02.889 Total : 873.89 3.41 15.81 0.00 143914.90 3932.16 7046430.72 00:27:02.889 { 00:27:02.889 "results": [ 00:27:02.889 { 00:27:02.889 "job": "NVMe0n1", 00:27:02.889 "core_mask": "0x4", 00:27:02.889 "workload": "verify", 00:27:02.889 "status": "finished", 00:27:02.889 "verify_range": { 00:27:02.889 "start": 0, 00:27:02.889 "length": 16384 00:27:02.889 }, 00:27:02.889 "queue_depth": 128, 00:27:02.889 "io_size": 4096, 00:27:02.889 "runtime": 8.097121, 00:27:02.889 "iops": 873.8908557745401, 00:27:02.889 "mibps": 3.4136361553692973, 00:27:02.889 "io_failed": 128, 00:27:02.889 "io_timeout": 0, 00:27:02.889 "avg_latency_us": 143914.8962495583, 00:27:02.889 "min_latency_us": 3932.16, 00:27:02.889 "max_latency_us": 7046430.72 00:27:02.889 } 00:27:02.889 ], 00:27:02.889 "core_count": 1 00:27:02.889 } 00:27:03.457 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:27:03.457 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:03.457 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:27:03.716 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:27:03.716 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:27:03.716 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:27:03.716 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:27:03.975 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:27:03.975 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 88222 00:27:03.975 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 88200 00:27:03.975 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 88200 ']' 00:27:03.975 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 88200 00:27:03.975 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:27:03.975 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:03.975 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88200 00:27:03.975 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:03.975 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:03.975 killing process with pid 88200 00:27:03.975 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88200' 00:27:03.975 Received shutdown signal, test time was about 9.158828 seconds 00:27:03.975 00:27:03.975 Latency(us) 00:27:03.975 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:27:03.975 =================================================================================================================== 00:27:03.975 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:03.975 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 88200 00:27:03.975 09:04:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 88200 00:27:04.911 09:04:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:05.168 [2024-09-28 09:04:43.149010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:05.426 09:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=88356 00:27:05.426 09:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:27:05.426 09:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 88356 /var/tmp/bdevperf.sock 00:27:05.426 09:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 88356 ']' 00:27:05.426 09:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:05.426 09:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:05.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:05.426 09:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:05.426 09:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:05.426 09:04:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:05.426 [2024-09-28 09:04:43.275217] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
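Before the next bdevperf instance takes over, note that the run that just ended reported its results twice a few lines above: once as the fixed-width Latency(us) table and once as a per-job JSON object under "results". A hedged way to pull the headline numbers back out of that JSON, assuming it had been saved to results.json (the filename is hypothetical):

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.io_failed) failed I/Os, avg latency \(.avg_latency_us) us"' results.json

For this run that yields roughly "NVMe0n1: 873.89 IOPS, 128 failed I/Os, avg latency 143914.9 us"; the 128 failed I/Os line up with the queue depth of 128 that was outstanding when the submission queue was deleted.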
00:27:05.426 [2024-09-28 09:04:43.275393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88356 ] 00:27:05.685 [2024-09-28 09:04:43.443700] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.685 [2024-09-28 09:04:43.600161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.944 [2024-09-28 09:04:43.751947] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:06.202 09:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.202 09:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:27:06.203 09:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:06.462 09:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:27:06.721 NVMe0n1 00:27:06.721 09:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=88374 00:27:06.721 09:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:06.721 09:04:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:27:06.980 Running I/O for 10 seconds... 
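The bdev_nvme_attach_controller call traced above is where this test case's timing policy is set. Restated on its own below as a sketch (flags verbatim from the trace; the reading of the three knobs is paraphrased from SPDK's bdev_nvme behaviour rather than quoted): reconnects are attempted every 1 s, queued I/O starts failing back to bdevperf once the controller has been unreachable for 2 s, and the controller (and its NVMe0n1 bdev) is torn down only if it stays unreachable for 5 s.

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 \
      --fast-io-fail-timeout-sec 2 \
      --reconnect-delay-sec 1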
09:04:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:27:08.177 6421.00 IOPS, 25.08 MiB/s
[... long run of repeated abort notices trimmed: with the listener removed, each queued READ and WRITE on sqid:1 (lba 59552 through 60144, and continuing below) is printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:08.179 [2024-09-28 09:04:45.951475]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.951558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.951631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.951721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.951821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.951919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952268] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:60312 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.952970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.952988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 
[2024-09-28 09:04:45.953007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.953027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.953042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.953060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.953074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.953092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.953107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.953125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.953145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.953163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.179 [2024-09-28 09:04:45.953178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.179 [2024-09-28 09:04:45.953198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.953706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.953739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.953778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.953826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:59584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.953862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.953896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.953928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:59608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.953962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.953980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.953995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:59624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.954028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.954064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.954097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:59648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.954131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:59656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.954163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.954196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:08.180 [2024-09-28 09:04:45.954231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.180 [2024-09-28 09:04:45.954264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:27:08.180 [2024-09-28 09:04:45.954302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:08.180 [2024-09-28 09:04:45.954325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:08.180 [2024-09-28 09:04:45.954339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60568 len:8 PRP1 0x0 PRP2 0x0 00:27:08.180 [2024-09-28 09:04:45.954356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954607] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b500 was disconnected and freed. reset controller. 
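Editor's note: the dump above prints one NOTICE pair (command plus "ABORTED - SQ DELETION" completion) for every I/O that was still in flight when the submission queue was torn down. If this console output were saved to a file (the name build.log below is only a placeholder), the aborts can be tallied with standard tools, for example:

  # Count how many completions were aborted by the SQ deletion.
  grep -oF 'ABORTED - SQ DELETION' build.log | wc -l
  # Split the aborted commands into WRITEs and READs on queue pair 1.
  grep -oF 'NOTICE*: WRITE sqid:1' build.log | wc -l
  grep -oF 'NOTICE*: READ sqid:1' build.log | wc -l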
00:27:08.180 [2024-09-28 09:04:45.954751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.180 [2024-09-28 09:04:45.954791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.180 [2024-09-28 09:04:45.954843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.180 [2024-09-28 09:04:45.954875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.180 [2024-09-28 09:04:45.954905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.180 [2024-09-28 09:04:45.954919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:08.180 [2024-09-28 09:04:45.955169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.180 [2024-09-28 09:04:45.955217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:08.180 [2024-09-28 09:04:45.955354] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:08.181 [2024-09-28 09:04:45.955400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:27:08.181 [2024-09-28 09:04:45.955419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:08.181 [2024-09-28 09:04:45.955452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:08.181 [2024-09-28 09:04:45.955478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:08.181 [2024-09-28 09:04:45.955498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:08.181 [2024-09-28 09:04:45.955515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:08.181 [2024-09-28 09:04:45.955551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.181 [2024-09-28 09:04:45.955570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:08.181 09:04:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:27:09.117 3722.00 IOPS, 14.54 MiB/s
[2024-09-28 09:04:46.955713] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:27:09.117 [2024-09-28 09:04:46.955806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420
00:27:09.117 [2024-09-28 09:04:46.955855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set
00:27:09.117 [2024-09-28 09:04:46.955897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor
00:27:09.117 [2024-09-28 09:04:46.955924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:09.117 [2024-09-28 09:04:46.955943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:09.117 [2024-09-28 09:04:46.955958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:09.117 [2024-09-28 09:04:46.955996] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:09.117 [2024-09-28 09:04:46.956029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:09.117 09:04:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:27:09.375 [2024-09-28 09:04:47.227868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:27:09.375 09:04:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 88374
00:27:10.199 2481.33 IOPS, 9.69 MiB/s
[2024-09-28 09:04:47.969486] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
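Editor's note: the shell trace above (host/timeout.sh@90 to @92) shows the shape of this timeout test: the subsystem's TCP listener is taken away so the host's reset/reconnect loop keeps failing against 10.0.0.3:4420, then the listener is re-added and the pending reset succeeds. A minimal standalone sketch of that listener drop/restore sequence, reusing the exact rpc.py invocations that appear in this log (it is not the actual host/timeout.sh script):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Drop the TCP listener; host-side connects start failing with errno 111.
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
  sleep 1
  # Restore the listener; the controller reset can then reconnect and complete.
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420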
00:27:17.202 1861.00 IOPS, 7.27 MiB/s
2912.20 IOPS, 11.38 MiB/s
3865.67 IOPS, 15.10 MiB/s
4565.86 IOPS, 17.84 MiB/s
5083.12 IOPS, 19.86 MiB/s
5485.44 IOPS, 21.43 MiB/s
5804.70 IOPS, 22.67 MiB/s
00:27:17.202 Latency(us)
00:27:17.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:17.202 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:17.202 Verification LBA range: start 0x0 length 0x4000
00:27:17.202 NVMe0n1 : 10.01 5811.27 22.70 0.00 0.00 21990.86 1839.48 3035150.89
00:27:17.202 ===================================================================================================================
00:27:17.202 Total : 5811.27 22.70 0.00 0.00 21990.86 1839.48 3035150.89
00:27:17.202 {
00:27:17.202 "results": [
00:27:17.202 {
00:27:17.202 "job": "NVMe0n1",
00:27:17.202 "core_mask": "0x4",
00:27:17.202 "workload": "verify",
00:27:17.202 "status": "finished",
00:27:17.202 "verify_range": {
00:27:17.202 "start": 0,
00:27:17.202 "length": 16384
00:27:17.202 },
00:27:17.202 "queue_depth": 128,
00:27:17.202 "io_size": 4096,
00:27:17.202 "runtime": 10.012449,
00:27:17.202 "iops": 5811.265555509945,
00:27:17.202 "mibps": 22.700256076210724,
00:27:17.202 "io_failed": 0,
00:27:17.202 "io_timeout": 0,
00:27:17.202 "avg_latency_us": 21990.861758747567,
00:27:17.203 "min_latency_us": 1839.4763636363637,
00:27:17.203 "max_latency_us": 3035150.8945454545
00:27:17.203 }
00:27:17.203 ],
00:27:17.203 "core_count": 1
00:27:17.203 }
00:27:17.203 09:04:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=88480
09:04:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
09:04:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:27:17.203 Running I/O for 10 seconds...
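Editor's note: the JSON block emitted by bdevperf above carries the same numbers as the plain-text table. If it were saved to a file (perf.json is only a hypothetical name), the headline figures could be pulled out with jq, for example:

  # Print job name, IOPS, throughput and average latency from the bdevperf JSON summary.
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' perf.json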
00:27:18.137 09:04:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:18.137 6548.00 IOPS, 25.58 MiB/s [2024-09-28 09:04:56.081244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.137 [2024-09-28 09:04:56.081317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 
09:04:56.081939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.081985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.081999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.082011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.082025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.082038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.082067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.082080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.082094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.082107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.082148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.082162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.082176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.082189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.082204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.082217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.082232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.082244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.082259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.082272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.082287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.137 [2024-09-28 09:04:56.082300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.137 [2024-09-28 09:04:56.082315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.138 [2024-09-28 09:04:56.082327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.138 [2024-09-28 09:04:56.082342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.138 [2024-09-28 09:04:56.082355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.138 [2024-09-28 09:04:56.082369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.138 [2024-09-28 09:04:56.082382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.138 [2024-09-28 09:04:56.082397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.138 [2024-09-28 09:04:56.082409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.138 [2024-09-28 09:04:56.082424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.138 [2024-09-28 09:04:56.082437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.138 [2024-09-28 09:04:56.082451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.138 [2024-09-28 09:04:56.082463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.138 [2024-09-28 09:04:56.082478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.138 [2024-09-28 09:04:56.082490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.138 [2024-09-28 09:04:56.082505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.138 [2024-09-28 09:04:56.082517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.138 [2024-09-28 09:04:56.082533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.138 [2024-09-28 09:04:56.082546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:18.138 [... long run of repeated nvme_qpair.c NOTICE records elided: each queued WRITE (lba 58968-59536) and READ (lba 58536-58648) command on sqid:1 is printed by nvme_io_qpair_print_command and completed as "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0" while the qpair is torn down ...]
00:27:18.140 [2024-09-28 09:04:56.085417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bc80 is same with the state(6) to be set
00:27:18.140 [2024-09-28 09:04:56.085445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:18.140 [2024-09-28 09:04:56.085458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:18.140 [2024-09-28 09:04:56.085472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59544 len:8 PRP1 0x0 PRP2 0x0
00:27:18.140 [2024-09-28 09:04:56.085486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:18.140 [2024-09-28 09:04:56.085768] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002bc80 was disconnected and freed. reset controller.
00:27:18.140 [2024-09-28 09:04:56.085885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.140 [2024-09-28 09:04:56.085909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.140 [2024-09-28 09:04:56.085940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.140 [2024-09-28 09:04:56.085954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.140 [2024-09-28 09:04:56.085967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.141 [2024-09-28 09:04:56.085980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.141 [2024-09-28 09:04:56.085996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:18.141 [2024-09-28 09:04:56.086010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.141 [2024-09-28 09:04:56.086023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:18.141 [2024-09-28 09:04:56.086309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:18.141 [2024-09-28 09:04:56.086345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:18.141 [2024-09-28 09:04:56.086461] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:18.141 [2024-09-28 09:04:56.086491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:27:18.141 [2024-09-28 09:04:56.086508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:18.141 [2024-09-28 09:04:56.086536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:18.141 [2024-09-28 09:04:56.086563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:18.141 [2024-09-28 09:04:56.086592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:18.141 [2024-09-28 09:04:56.086606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:18.141 [2024-09-28 09:04:56.086651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:18.141 [2024-09-28 09:04:56.086667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:18.141 09:04:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:27:19.335 3658.00 IOPS, 14.29 MiB/s [2024-09-28 09:04:57.086820] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:19.335 [2024-09-28 09:04:57.086909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:27:19.335 [2024-09-28 09:04:57.086930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:19.335 [2024-09-28 09:04:57.086962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:19.335 [2024-09-28 09:04:57.086987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:19.335 [2024-09-28 09:04:57.087001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:19.335 [2024-09-28 09:04:57.087015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:19.335 [2024-09-28 09:04:57.087050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:19.335 [2024-09-28 09:04:57.087067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:20.270 2438.67 IOPS, 9.53 MiB/s [2024-09-28 09:04:58.087185] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.270 [2024-09-28 09:04:58.087270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:27:20.270 [2024-09-28 09:04:58.087289] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:20.270 [2024-09-28 09:04:58.087319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:20.270 [2024-09-28 09:04:58.087345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:20.270 [2024-09-28 09:04:58.087359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:20.270 [2024-09-28 09:04:58.087372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:20.270 [2024-09-28 09:04:58.087405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:20.270 [2024-09-28 09:04:58.087420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:21.207 1829.00 IOPS, 7.14 MiB/s [2024-09-28 09:04:59.090661] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.207 [2024-09-28 09:04:59.090763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:27:21.207 [2024-09-28 09:04:59.090784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:27:21.207 [2024-09-28 09:04:59.091092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:27:21.207 [2024-09-28 09:04:59.091358] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:21.207 [2024-09-28 09:04:59.091377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:21.207 [2024-09-28 09:04:59.091392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:21.207 [2024-09-28 09:04:59.095171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:21.207 [2024-09-28 09:04:59.095238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:21.207 09:04:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:21.467 [2024-09-28 09:04:59.352613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:21.467 09:04:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 88480 00:27:22.294 1463.20 IOPS, 5.72 MiB/s [2024-09-28 09:05:00.132636] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
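The reconnect loop above is the behaviour host/timeout.sh is exercising: the subsystem's TCP listener was evidently dropped earlier (the same remove_listener RPC is issued at host/timeout.sh@126 for the next case), so each bdev_nvme reconnect attempt to 10.0.0.3:4420 fails with errno 111 until the script re-adds the listener at host/timeout.sh@102 and the controller reset finally succeeds. A minimal sketch of that pattern, reusing the rpc.py invocations that appear in this log (not the verbatim test script):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Drop the listener so new connections to 10.0.0.3:4420 are refused (errno 111).
  $RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
  # Let bdev_nvme go through a few failed reconnect attempts (the script sleeps 3 s at host/timeout.sh@101).
  sleep 3
  # Restore the listener; the next reconnect attempt connects and the reset completes.
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420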
00:27:27.042 2424.83 IOPS, 9.47 MiB/s 3318.43 IOPS, 12.96 MiB/s 3985.62 IOPS, 15.57 MiB/s 4506.78 IOPS, 17.60 MiB/s 4923.70 IOPS, 19.23 MiB/s 00:27:27.042 Latency(us) 00:27:27.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.042 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:27.042 Verification LBA range: start 0x0 length 0x4000 00:27:27.042 NVMe0n1 : 10.01 4926.71 19.24 3825.50 0.00 14596.47 655.36 3019898.88 00:27:27.042 =================================================================================================================== 00:27:27.042 Total : 4926.71 19.24 3825.50 0.00 14596.47 0.00 3019898.88 00:27:27.042 { 00:27:27.042 "results": [ 00:27:27.042 { 00:27:27.042 "job": "NVMe0n1", 00:27:27.042 "core_mask": "0x4", 00:27:27.042 "workload": "verify", 00:27:27.042 "status": "finished", 00:27:27.042 "verify_range": { 00:27:27.042 "start": 0, 00:27:27.042 "length": 16384 00:27:27.042 }, 00:27:27.042 "queue_depth": 128, 00:27:27.042 "io_size": 4096, 00:27:27.042 "runtime": 10.009928, 00:27:27.042 "iops": 4926.708763539558, 00:27:27.042 "mibps": 19.2449561075764, 00:27:27.042 "io_failed": 38293, 00:27:27.042 "io_timeout": 0, 00:27:27.042 "avg_latency_us": 14596.474566934281, 00:27:27.042 "min_latency_us": 655.36, 00:27:27.042 "max_latency_us": 3019898.88 00:27:27.042 } 00:27:27.042 ], 00:27:27.042 "core_count": 1 00:27:27.042 } 00:27:27.042 09:05:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 88356 00:27:27.042 09:05:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 88356 ']' 00:27:27.042 09:05:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 88356 00:27:27.043 09:05:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:27:27.043 09:05:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:27.043 09:05:04 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88356 00:27:27.043 killing process with pid 88356 00:27:27.043 Received shutdown signal, test time was about 10.000000 seconds 00:27:27.043 00:27:27.043 Latency(us) 00:27:27.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.043 =================================================================================================================== 00:27:27.043 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:27.043 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:27.043 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:27.043 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88356' 00:27:27.043 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 88356 00:27:27.043 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 88356 00:27:27.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
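As a quick cross-check, the headline numbers in the result table above follow directly from the JSON fields (io_size 4096, runtime 10.009928 s): MiB/s = IOPS * io_size / 2^20 = 4926.71 * 4096 / 1048576 ~= 19.24, and Fail/s = io_failed / runtime = 38293 / 10.009928 ~= 3825.5, matching the 19.24 MiB/s and 3825.50 Fail/s reported for NVMe0n1.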
00:27:27.984 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=88600 00:27:27.984 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:27:27.984 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 88600 /var/tmp/bdevperf.sock 00:27:27.984 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 88600 ']' 00:27:27.984 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:27.984 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:27.984 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:27.984 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:27.984 09:05:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:28.243 [2024-09-28 09:05:06.025297] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:27:28.243 [2024-09-28 09:05:06.025474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88600 ] 00:27:28.243 [2024-09-28 09:05:06.192717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.502 [2024-09-28 09:05:06.350861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.761 [2024-09-28 09:05:06.503534] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:29.020 09:05:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:29.020 09:05:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:27:29.020 09:05:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88600 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:27:29.020 09:05:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=88612 00:27:29.020 09:05:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:27:29.279 09:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:29.847 NVMe0n1 00:27:29.847 09:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=88653 00:27:29.847 09:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:29.847 09:05:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:27:29.847 Running I/O for 10 seconds... 
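Pulled together from the invocations logged above, this is roughly how the second timeout case is wired up: bdevperf is started in wait-for-RPC mode on its own socket, bdev_nvme options are applied, and the controller is attached with an explicit reconnect delay and controller-loss timeout so the abort/reconnect behaviour that follows is bounded. A condensed sketch, with paths and flags copied from this log and the waitforlisten/bpftrace plumbing omitted (not the verbatim test script):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  # Start bdevperf idle (-z) on core mask 0x4 with a 128-deep, 4 KiB randread, 10 s workload definition.
  $BDEVPERF -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
  # Apply the bdev_nvme options used in this run, then attach the target with reconnect limits.
  $RPC bdev_nvme_set_options -r -1 -e 9
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Kick off the I/O via the bdevperf RPC helper.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &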
00:27:30.784 09:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:27:31.046 13716.00 IOPS, 53.58 MiB/s
00:27:31.046 [... long run of repeated nvme_qpair.c NOTICE records elided: the outstanding randread commands on sqid:1 (cid counting down from 126) are printed by nvme_io_qpair_print_command and completed as "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0" after the listener is removed ...]
00:27:31.049 [2024-09-28 09:05:08.796188]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:67432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796902] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.796970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.796985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.797004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.797019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.797038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.797053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.797072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.797088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.797109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.797124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.049 [2024-09-28 09:05:08.797145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.049 [2024-09-28 09:05:08.797175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:18112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:112328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:112048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:31.050 [2024-09-28 09:05:08.797848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.797868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:27:31.050 [2024-09-28 09:05:08.797901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:31.050 [2024-09-28 09:05:08.797923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:31.050 [2024-09-28 09:05:08.797938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121904 len:8 PRP1 0x0 PRP2 0x0 00:27:31.050 [2024-09-28 09:05:08.797955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.798197] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61500002b280 was disconnected and freed. reset controller. 
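The block above is bdev_nvme draining the I/O qpair after the TCP connection to 10.0.0.3:4420 dropped: every queued READ is aborted and reported with the generic ABORTED - SQ DELETION status (00/08) before qpair 0x61500002b280 is freed and a controller reset is scheduled. A quick way to sanity-check a run like this is to count those abort completions in the captured console output; the file name below is only a placeholder for wherever this log was saved, not a path used by the test.

```sh
# Hypothetical helper -- console.log stands in for the saved build output.
grep -c 'ABORTED - SQ DELETION' console.log                      # total aborted completions
grep -oE 'READ sqid:1 cid:[0-9]+' console.log | sort -u | wc -l  # distinct commands that were aborted
```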
00:27:31.050 [2024-09-28 09:05:08.798318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.050 [2024-09-28 09:05:08.798348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.798367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.050 [2024-09-28 09:05:08.798382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.798397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.050 [2024-09-28 09:05:08.798413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.798428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.050 [2024-09-28 09:05:08.798443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.050 [2024-09-28 09:05:08.798463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:27:31.050 [2024-09-28 09:05:08.798795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.050 [2024-09-28 09:05:08.798869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:31.050 [2024-09-28 09:05:08.799057] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.050 [2024-09-28 09:05:08.799096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:27:31.050 [2024-09-28 09:05:08.799115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:27:31.050 [2024-09-28 09:05:08.799164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:31.050 [2024-09-28 09:05:08.799209] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:31.050 [2024-09-28 09:05:08.799237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:31.050 [2024-09-28 09:05:08.799253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:31.050 [2024-09-28 09:05:08.799291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
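The first reconnect attempt above fails immediately with errno 111 (connection refused), so bdev_nvme marks the controller as failed and schedules another reset; the entries that follow repeat that cycle roughly every two seconds until the ~8 s bdevperf run finishes. That cadence is governed by the reconnect parameters passed when the controller is attached. The exact invocation used by host/timeout.sh is not visible in this log; the sketch below is an assumption of what such an attach over rpc.py can look like, with placeholder timeout values.

```sh
# Hypothetical sketch only -- the flag values are assumptions, not taken from timeout.sh.
# --reconnect-delay-sec:      seconds to wait between reconnect attempts
# --ctrlr-loss-timeout-sec:   seconds before the controller is given up on entirely
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --reconnect-delay-sec 2 --ctrlr-loss-timeout-sec 8
```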
00:27:31.050 [2024-09-28 09:05:08.799310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:31.050 09:05:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 88653 00:27:32.926 7429.50 IOPS, 29.02 MiB/s 4953.00 IOPS, 19.35 MiB/s [2024-09-28 09:05:10.799488] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.926 [2024-09-28 09:05:10.799569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:27:32.926 [2024-09-28 09:05:10.799593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:27:32.926 [2024-09-28 09:05:10.799629] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:32.926 [2024-09-28 09:05:10.799658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:32.926 [2024-09-28 09:05:10.799675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:32.926 [2024-09-28 09:05:10.799690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:32.926 [2024-09-28 09:05:10.799733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:32.926 [2024-09-28 09:05:10.799767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:35.059 3714.75 IOPS, 14.51 MiB/s 2971.80 IOPS, 11.61 MiB/s [2024-09-28 09:05:12.799985] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:35.059 [2024-09-28 09:05:12.800261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:27:35.059 [2024-09-28 09:05:12.800425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:27:35.059 [2024-09-28 09:05:12.800474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:35.059 [2024-09-28 09:05:12.800524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:35.059 [2024-09-28 09:05:12.800557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:35.059 [2024-09-28 09:05:12.800573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:35.059 [2024-09-28 09:05:12.800616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:35.059 [2024-09-28 09:05:12.800635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.933 2476.50 IOPS, 9.67 MiB/s 2122.71 IOPS, 8.29 MiB/s [2024-09-28 09:05:14.800733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:36.933 [2024-09-28 09:05:14.800802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:36.933 [2024-09-28 09:05:14.800858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:36.933 [2024-09-28 09:05:14.800878] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:36.933 [2024-09-28 09:05:14.800925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.870 1857.38 IOPS, 7.26 MiB/s 00:27:37.870 Latency(us) 00:27:37.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.870 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:27:37.870 NVMe0n1 : 8.10 1834.76 7.17 15.81 0.00 69095.04 8817.57 7046430.72 00:27:37.870 =================================================================================================================== 00:27:37.870 Total : 1834.76 7.17 15.81 0.00 69095.04 8817.57 7046430.72 00:27:37.870 { 00:27:37.870 "results": [ 00:27:37.870 { 00:27:37.870 "job": "NVMe0n1", 00:27:37.870 "core_mask": "0x4", 00:27:37.870 "workload": "randread", 00:27:37.870 "status": "finished", 00:27:37.870 "queue_depth": 128, 00:27:37.870 "io_size": 4096, 00:27:37.870 "runtime": 8.098594, 00:27:37.870 "iops": 1834.7629230456546, 00:27:37.870 "mibps": 7.167042668147088, 00:27:37.870 "io_failed": 128, 00:27:37.870 "io_timeout": 0, 00:27:37.871 "avg_latency_us": 69095.03807057026, 00:27:37.871 "min_latency_us": 8817.57090909091, 00:27:37.871 "max_latency_us": 7046430.72 00:27:37.871 } 00:27:37.871 ], 00:27:37.871 "core_count": 1 00:27:37.871 } 00:27:37.871 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:37.871 Attaching 5 probes... 
00:27:37.871 1315.909234: reset bdev controller NVMe0 00:27:37.871 1316.091017: reconnect bdev controller NVMe0 00:27:37.871 3316.495169: reconnect delay bdev controller NVMe0 00:27:37.871 3316.531990: reconnect bdev controller NVMe0 00:27:37.871 5316.990441: reconnect delay bdev controller NVMe0 00:27:37.871 5317.024424: reconnect bdev controller NVMe0 00:27:37.871 7317.832005: reconnect delay bdev controller NVMe0 00:27:37.871 7317.865492: reconnect bdev controller NVMe0 00:27:37.871 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:27:37.871 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:27:37.871 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 88612 00:27:37.871 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:37.871 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 88600 00:27:37.871 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 88600 ']' 00:27:37.871 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 88600 00:27:37.871 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:27:37.871 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:37.871 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88600 00:27:38.129 killing process with pid 88600 00:27:38.129 Received shutdown signal, test time was about 8.165783 seconds 00:27:38.129 00:27:38.129 Latency(us) 00:27:38.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:38.129 =================================================================================================================== 00:27:38.129 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:38.129 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:27:38.129 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:27:38.129 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88600' 00:27:38.129 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 88600 00:27:38.129 09:05:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 88600 00:27:39.065 09:05:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:39.325 rmmod 
nvme_tcp 00:27:39.325 rmmod nvme_fabrics 00:27:39.325 rmmod nvme_keyring 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@513 -- # '[' -n 88145 ']' 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # killprocess 88145 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 88145 ']' 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 88145 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88145 00:27:39.325 killing process with pid 88145 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88145' 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 88145 00:27:39.325 09:05:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 88145 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-save 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:40.705 09:05:18 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:27:40.705 ************************************ 00:27:40.705 END TEST nvmf_timeout 00:27:40.705 ************************************ 00:27:40.705 00:27:40.705 real 0m50.499s 00:27:40.705 user 2m26.164s 00:27:40.705 sys 0m5.760s 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:40.705 ************************************ 00:27:40.705 END TEST nvmf_host 00:27:40.705 ************************************ 00:27:40.705 00:27:40.705 real 6m26.002s 00:27:40.705 user 17m47.145s 00:27:40.705 sys 1m18.735s 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:40.705 09:05:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.705 09:05:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:40.705 09:05:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:27:40.705 ************************************ 00:27:40.705 END TEST nvmf_tcp 00:27:40.705 ************************************ 00:27:40.705 00:27:40.705 real 17m4.847s 00:27:40.705 user 44m19.581s 00:27:40.705 sys 4m7.842s 00:27:40.705 09:05:18 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:40.705 09:05:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:40.705 09:05:18 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:27:40.705 09:05:18 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:40.705 09:05:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:40.705 09:05:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:40.705 09:05:18 -- common/autotest_common.sh@10 -- # set +x 00:27:40.705 ************************************ 00:27:40.705 START TEST nvmf_dif 00:27:40.705 ************************************ 00:27:40.705 09:05:18 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:40.965 * Looking for test storage... 
00:27:40.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:40.965 09:05:18 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:27:40.965 09:05:18 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:27:40.965 09:05:18 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:27:40.965 09:05:18 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:27:40.965 09:05:18 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:40.965 09:05:18 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:27:40.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.965 --rc genhtml_branch_coverage=1 00:27:40.965 --rc genhtml_function_coverage=1 00:27:40.965 --rc genhtml_legend=1 00:27:40.965 --rc geninfo_all_blocks=1 00:27:40.965 --rc geninfo_unexecuted_blocks=1 00:27:40.965 00:27:40.965 ' 00:27:40.965 09:05:18 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:27:40.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.965 --rc genhtml_branch_coverage=1 00:27:40.965 --rc genhtml_function_coverage=1 00:27:40.965 --rc genhtml_legend=1 00:27:40.965 --rc geninfo_all_blocks=1 00:27:40.965 --rc geninfo_unexecuted_blocks=1 00:27:40.965 00:27:40.965 ' 00:27:40.965 09:05:18 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:27:40.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.965 --rc genhtml_branch_coverage=1 00:27:40.965 --rc genhtml_function_coverage=1 00:27:40.965 --rc genhtml_legend=1 00:27:40.965 --rc geninfo_all_blocks=1 00:27:40.965 --rc geninfo_unexecuted_blocks=1 00:27:40.965 00:27:40.965 ' 00:27:40.965 09:05:18 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:27:40.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:40.965 --rc genhtml_branch_coverage=1 00:27:40.965 --rc genhtml_function_coverage=1 00:27:40.965 --rc genhtml_legend=1 00:27:40.965 --rc geninfo_all_blocks=1 00:27:40.965 --rc geninfo_unexecuted_blocks=1 00:27:40.965 00:27:40.965 ' 00:27:40.965 09:05:18 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.965 09:05:18 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.965 09:05:18 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.965 09:05:18 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.965 09:05:18 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.965 09:05:18 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.965 09:05:18 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:40.966 09:05:18 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:40.966 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:40.966 09:05:18 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:40.966 09:05:18 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:40.966 09:05:18 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:40.966 09:05:18 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:40.966 09:05:18 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.966 09:05:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:40.966 09:05:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:27:40.966 09:05:18 
nvmf_dif -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@456 -- # nvmf_veth_init 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:40.966 Cannot find device "nvmf_init_br" 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:40.966 Cannot find device "nvmf_init_br2" 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:40.966 Cannot find device "nvmf_tgt_br" 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@164 -- # true 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:40.966 Cannot find device "nvmf_tgt_br2" 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@165 -- # true 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:40.966 Cannot find device "nvmf_init_br" 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@166 -- # true 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:40.966 Cannot find device "nvmf_init_br2" 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@167 -- # true 00:27:40.966 09:05:18 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:41.225 Cannot find device "nvmf_tgt_br" 00:27:41.225 09:05:18 nvmf_dif -- nvmf/common.sh@168 -- # true 00:27:41.225 09:05:18 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:41.225 Cannot find device "nvmf_tgt_br2" 00:27:41.225 09:05:18 nvmf_dif -- nvmf/common.sh@169 -- # true 00:27:41.225 09:05:18 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:41.225 Cannot find device "nvmf_br" 00:27:41.225 09:05:18 nvmf_dif -- nvmf/common.sh@170 -- # true 00:27:41.225 09:05:18 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:27:41.225 Cannot find device "nvmf_init_if" 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@171 -- # true 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:41.225 Cannot find device "nvmf_init_if2" 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@172 -- # true 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:41.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@173 -- # true 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:41.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@174 -- # true 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:41.225 09:05:19 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:41.484 09:05:19 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:41.484 09:05:19 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:41.484 09:05:19 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:41.484 09:05:19 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:41.484 09:05:19 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:41.484 09:05:19 nvmf_dif -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:41.484 09:05:19 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:41.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:41.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:27:41.484 00:27:41.484 --- 10.0.0.3 ping statistics --- 00:27:41.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.484 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:27:41.484 09:05:19 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:41.484 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:41.484 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:27:41.484 00:27:41.484 --- 10.0.0.4 ping statistics --- 00:27:41.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.484 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:27:41.484 09:05:19 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:41.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:41.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:27:41.484 00:27:41.484 --- 10.0.0.1 ping statistics --- 00:27:41.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.485 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:27:41.485 09:05:19 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:41.485 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:41.485 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:27:41.485 00:27:41.485 --- 10.0.0.2 ping statistics --- 00:27:41.485 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.485 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:27:41.485 09:05:19 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.485 09:05:19 nvmf_dif -- nvmf/common.sh@457 -- # return 0 00:27:41.485 09:05:19 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:27:41.485 09:05:19 nvmf_dif -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:41.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:41.744 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:41.744 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:41.744 09:05:19 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.744 09:05:19 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:27:41.744 09:05:19 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:27:41.744 09:05:19 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.744 09:05:19 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:27:41.744 09:05:19 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:27:41.744 09:05:19 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:41.744 09:05:19 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:41.744 09:05:19 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:27:41.744 09:05:19 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:41.744 09:05:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:41.744 09:05:19 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=89167 00:27:41.744 09:05:19 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:41.744 09:05:19 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 89167 00:27:41.744 09:05:19 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 89167 ']' 00:27:41.744 09:05:19 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.744 09:05:19 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:41.744 09:05:19 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.744 09:05:19 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:41.744 09:05:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:42.003 [2024-09-28 09:05:19.831752] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:27:42.003 [2024-09-28 09:05:19.831999] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.267 [2024-09-28 09:05:20.010720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.267 [2024-09-28 09:05:20.237083] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
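Annotation: the nvmf_veth_init and nvmfappstart steps traced above reduce to a small amount of iproute2 and iptables work plus one ip-netns-wrapped target launch. A condensed sketch follows, using the same namespace, interface and address names the trace prints; the harness also sets up the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2, 10.0.0.2/10.0.0.4) the same way, which is omitted here for brevity.

    # Target-side network: a namespace holding the target ends of two veth pairs,
    # bridged to the initiator ends left in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge port
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge port
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Allow NVMe/TCP traffic to port 4420, then run the target inside the namespace
    # so it owns 10.0.0.3 (the -i/-e values are the ones shown in the trace).
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &

The ping exchanges logged just above are the harness verifying that the bridge forwards in both directions (root namespace to 10.0.0.3/10.0.0.4 and back from the namespace to 10.0.0.1/10.0.0.2) before the target is started.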
00:27:42.267 [2024-09-28 09:05:20.237194] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.267 [2024-09-28 09:05:20.237231] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.267 [2024-09-28 09:05:20.237248] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.267 [2024-09-28 09:05:20.237260] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:42.267 [2024-09-28 09:05:20.237298] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.525 [2024-09-28 09:05:20.390719] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:42.783 09:05:20 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:42.783 09:05:20 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:27:42.783 09:05:20 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:27:42.783 09:05:20 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:42.783 09:05:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:43.042 09:05:20 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:43.042 09:05:20 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:43.042 09:05:20 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:43.042 09:05:20 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.042 09:05:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:43.042 [2024-09-28 09:05:20.804936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.042 09:05:20 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.042 09:05:20 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:43.042 09:05:20 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:43.042 09:05:20 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:43.042 09:05:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:43.042 ************************************ 00:27:43.042 START TEST fio_dif_1_default 00:27:43.042 ************************************ 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:43.042 bdev_null0 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:43.042 
09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.042 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:43.043 [2024-09-28 09:05:20.849197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:43.043 { 00:27:43.043 "params": { 00:27:43.043 "name": "Nvme$subsystem", 00:27:43.043 "trtype": "$TEST_TRANSPORT", 00:27:43.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.043 "adrfam": "ipv4", 00:27:43.043 "trsvcid": "$NVMF_PORT", 00:27:43.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.043 "hdgst": ${hdgst:-false}, 00:27:43.043 "ddgst": ${ddgst:-false} 00:27:43.043 }, 00:27:43.043 "method": "bdev_nvme_attach_controller" 00:27:43.043 } 00:27:43.043 EOF 00:27:43.043 )") 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1341 -- # shift 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:27:43.043 "params": { 00:27:43.043 "name": "Nvme0", 00:27:43.043 "trtype": "tcp", 00:27:43.043 "traddr": "10.0.0.3", 00:27:43.043 "adrfam": "ipv4", 00:27:43.043 "trsvcid": "4420", 00:27:43.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:43.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:43.043 "hdgst": false, 00:27:43.043 "ddgst": false 00:27:43.043 }, 00:27:43.043 "method": "bdev_nvme_attach_controller" 00:27:43.043 }' 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # break 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:43.043 09:05:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.302 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:43.302 fio-3.35 00:27:43.302 Starting 1 thread 00:27:55.539 00:27:55.539 filename0: (groupid=0, jobs=1): err= 0: pid=89230: Sat Sep 28 09:05:31 2024 00:27:55.539 read: IOPS=7944, BW=31.0MiB/s (32.5MB/s)(310MiB/10001msec) 00:27:55.539 slat (nsec): min=7059, max=80399, avg=9877.07, stdev=4486.05 00:27:55.539 clat (usec): min=397, max=3529, avg=473.55, stdev=49.38 00:27:55.539 lat (usec): min=404, max=3543, avg=483.43, stdev=50.45 00:27:55.539 clat percentiles (usec): 00:27:55.539 | 1.00th=[ 404], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 437], 00:27:55.539 | 30.00th=[ 449], 40.00th=[ 457], 50.00th=[ 465], 60.00th=[ 478], 00:27:55.539 | 70.00th=[ 490], 80.00th=[ 502], 90.00th=[ 529], 95.00th=[ 562], 00:27:55.539 | 99.00th=[ 619], 99.50th=[ 635], 99.90th=[ 725], 99.95th=[ 750], 00:27:55.539 | 99.99th=[ 1123] 00:27:55.539 bw ( KiB/s): min=30272, max=32896, per=100.00%, avg=31848.42, stdev=731.49, samples=19 00:27:55.539 iops : min= 7568, max= 8224, avg=7962.11, stdev=182.87, samples=19 00:27:55.539 lat 
(usec) : 500=78.59%, 750=21.37%, 1000=0.04% 00:27:55.539 lat (msec) : 2=0.01%, 4=0.01% 00:27:55.539 cpu : usr=86.41%, sys=11.74%, ctx=68, majf=0, minf=1060 00:27:55.539 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.539 issued rwts: total=79448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.539 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:55.539 00:27:55.539 Run status group 0 (all jobs): 00:27:55.539 READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=310MiB (325MB), run=10001-10001msec 00:27:55.539 ----------------------------------------------------- 00:27:55.539 Suppressions used: 00:27:55.539 count bytes template 00:27:55.539 1 8 /usr/src/fio/parse.c 00:27:55.539 1 8 libtcmalloc_minimal.so 00:27:55.539 1 904 libcrypto.so 00:27:55.539 ----------------------------------------------------- 00:27:55.539 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.539 ************************************ 00:27:55.539 END TEST fio_dif_1_default 00:27:55.539 ************************************ 00:27:55.539 00:27:55.539 real 0m12.265s 00:27:55.539 user 0m10.487s 00:27:55.539 sys 0m1.536s 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:55.539 09:05:33 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:55.539 09:05:33 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:55.539 09:05:33 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:55.539 09:05:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:55.539 ************************************ 00:27:55.539 START TEST fio_dif_1_multi_subsystems 00:27:55.539 ************************************ 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:55.539 09:05:33 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:55.539 bdev_null0 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:55.539 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:55.540 [2024-09-28 09:05:33.172705] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:55.540 bdev_null1 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 
--allow-any-host 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:55.540 { 00:27:55.540 "params": { 00:27:55.540 "name": "Nvme$subsystem", 00:27:55.540 "trtype": "$TEST_TRANSPORT", 00:27:55.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.540 "adrfam": "ipv4", 00:27:55.540 "trsvcid": "$NVMF_PORT", 00:27:55.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.540 "hdgst": ${hdgst:-false}, 00:27:55.540 "ddgst": ${ddgst:-false} 00:27:55.540 }, 00:27:55.540 "method": "bdev_nvme_attach_controller" 00:27:55.540 } 00:27:55.540 EOF 00:27:55.540 )") 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:55.540 09:05:33 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:27:55.540 { 00:27:55.540 "params": { 00:27:55.540 "name": "Nvme$subsystem", 00:27:55.540 "trtype": "$TEST_TRANSPORT", 00:27:55.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.540 "adrfam": "ipv4", 00:27:55.540 "trsvcid": "$NVMF_PORT", 00:27:55.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.540 "hdgst": ${hdgst:-false}, 00:27:55.540 "ddgst": ${ddgst:-false} 00:27:55.540 }, 00:27:55.540 "method": "bdev_nvme_attach_controller" 00:27:55.540 } 00:27:55.540 EOF 00:27:55.540 )") 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 
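Annotation: the JSON being assembled here (printed in full just below) is what the fio spdk_bdev plugin consumes. gen_nvmf_target_json emits one bdev_nvme_attach_controller parameter block per subsystem, and fio_bdev hands it to fio on /dev/fd/62 alongside the generated job file on /dev/fd/61. Stripped of the wrapper functions, the invocation is roughly the following sketch, using the paths the trace prints:

    # LD_PRELOAD order matters: the ASAN runtime must be loaded before the SPDK fio
    # plugin, which is why the trace resolves libasan.so.8 via ldd and prepends it.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61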
00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:27:55.540 "params": { 00:27:55.540 "name": "Nvme0", 00:27:55.540 "trtype": "tcp", 00:27:55.540 "traddr": "10.0.0.3", 00:27:55.540 "adrfam": "ipv4", 00:27:55.540 "trsvcid": "4420", 00:27:55.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:55.540 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:55.540 "hdgst": false, 00:27:55.540 "ddgst": false 00:27:55.540 }, 00:27:55.540 "method": "bdev_nvme_attach_controller" 00:27:55.540 },{ 00:27:55.540 "params": { 00:27:55.540 "name": "Nvme1", 00:27:55.540 "trtype": "tcp", 00:27:55.540 "traddr": "10.0.0.3", 00:27:55.540 "adrfam": "ipv4", 00:27:55.540 "trsvcid": "4420", 00:27:55.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.540 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:55.540 "hdgst": false, 00:27:55.540 "ddgst": false 00:27:55.540 }, 00:27:55.540 "method": "bdev_nvme_attach_controller" 00:27:55.540 }' 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # break 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:55.540 09:05:33 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.540 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:55.540 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:55.540 fio-3.35 00:27:55.540 Starting 2 threads 00:28:07.748 00:28:07.748 filename0: (groupid=0, jobs=1): err= 0: pid=89392: Sat Sep 28 09:05:44 2024 00:28:07.748 read: IOPS=4367, BW=17.1MiB/s (17.9MB/s)(171MiB/10001msec) 00:28:07.748 slat (usec): min=7, max=415, avg=14.25, stdev= 5.64 00:28:07.748 clat (usec): min=619, max=1611, avg=875.51, stdev=71.73 00:28:07.748 lat (usec): min=628, max=1632, avg=889.76, stdev=73.45 00:28:07.748 clat percentiles (usec): 00:28:07.748 | 1.00th=[ 742], 5.00th=[ 766], 10.00th=[ 791], 20.00th=[ 816], 00:28:07.748 | 30.00th=[ 840], 40.00th=[ 857], 50.00th=[ 873], 60.00th=[ 889], 00:28:07.748 | 70.00th=[ 906], 80.00th=[ 922], 90.00th=[ 963], 95.00th=[ 996], 00:28:07.748 | 99.00th=[ 1090], 99.50th=[ 1139], 99.90th=[ 1287], 99.95th=[ 1319], 00:28:07.748 | 99.99th=[ 1549] 00:28:07.748 bw ( KiB/s): min=16768, max=17888, per=50.00%, avg=17471.84, stdev=250.90, samples=19 00:28:07.748 iops : min= 4192, max= 4472, avg=4367.95, stdev=62.73, samples=19 00:28:07.748 lat (usec) : 750=2.24%, 1000=93.15% 00:28:07.748 lat (msec) : 2=4.61% 00:28:07.748 cpu : usr=89.71%, sys=8.57%, ctx=102, majf=0, minf=1062 00:28:07.748 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:07.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.748 issued rwts: total=43676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.748 latency : target=0, 
window=0, percentile=100.00%, depth=4 00:28:07.748 filename1: (groupid=0, jobs=1): err= 0: pid=89393: Sat Sep 28 09:05:44 2024 00:28:07.748 read: IOPS=4368, BW=17.1MiB/s (17.9MB/s)(171MiB/10001msec) 00:28:07.748 slat (nsec): min=7565, max=76296, avg=13979.52, stdev=4879.11 00:28:07.748 clat (usec): min=440, max=1629, avg=876.22, stdev=55.84 00:28:07.748 lat (usec): min=448, max=1664, avg=890.20, stdev=56.61 00:28:07.748 clat percentiles (usec): 00:28:07.748 | 1.00th=[ 783], 5.00th=[ 807], 10.00th=[ 816], 20.00th=[ 832], 00:28:07.748 | 30.00th=[ 848], 40.00th=[ 857], 50.00th=[ 865], 60.00th=[ 881], 00:28:07.748 | 70.00th=[ 898], 80.00th=[ 914], 90.00th=[ 947], 95.00th=[ 979], 00:28:07.748 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1139], 99.95th=[ 1172], 00:28:07.748 | 99.99th=[ 1385] 00:28:07.748 bw ( KiB/s): min=16766, max=17920, per=50.01%, avg=17476.79, stdev=260.61, samples=19 00:28:07.748 iops : min= 4191, max= 4480, avg=4369.16, stdev=65.23, samples=19 00:28:07.748 lat (usec) : 500=0.01%, 750=0.05%, 1000=96.64% 00:28:07.748 lat (msec) : 2=3.29% 00:28:07.748 cpu : usr=90.70%, sys=7.97%, ctx=14, majf=0, minf=1062 00:28:07.748 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:07.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.748 issued rwts: total=43692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.748 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:07.748 00:28:07.748 Run status group 0 (all jobs): 00:28:07.748 READ: bw=34.1MiB/s (35.8MB/s), 17.1MiB/s-17.1MiB/s (17.9MB/s-17.9MB/s), io=341MiB (358MB), run=10001-10001msec 00:28:07.748 ----------------------------------------------------- 00:28:07.748 Suppressions used: 00:28:07.748 count bytes template 00:28:07.748 2 16 /usr/src/fio/parse.c 00:28:07.748 1 8 libtcmalloc_minimal.so 00:28:07.748 1 904 libcrypto.so 00:28:07.748 ----------------------------------------------------- 00:28:07.748 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 
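Annotation: destroy_subsystems 0 1, which begins here, mirrors the setup in reverse; each subsystem is torn down and then its backing null bdev deleted. Condensed into one loop, the rpc_cmd calls visible in the trace amount to:

    # Per-test teardown (sketch condensing the trace; rpc_cmd forwards each call to
    # the target's JSON-RPC socket).
    for sub in 0 1; do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
        rpc_cmd bdev_null_delete "bdev_null$sub"
    done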
00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.748 00:28:07.748 real 0m12.296s 00:28:07.748 user 0m19.933s 00:28:07.748 sys 0m2.004s 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:07.748 ************************************ 00:28:07.748 END TEST fio_dif_1_multi_subsystems 00:28:07.748 ************************************ 00:28:07.748 09:05:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:07.748 09:05:45 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:07.748 09:05:45 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:07.748 09:05:45 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:07.748 09:05:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:07.748 ************************************ 00:28:07.748 START TEST fio_dif_rand_params 00:28:07.748 ************************************ 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.748 09:05:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.748 bdev_null0 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.748 [2024-09-28 09:05:45.520457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:07.748 09:05:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:07.749 { 00:28:07.749 "params": { 00:28:07.749 "name": "Nvme$subsystem", 00:28:07.749 "trtype": "$TEST_TRANSPORT", 00:28:07.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.749 "adrfam": "ipv4", 00:28:07.749 "trsvcid": "$NVMF_PORT", 00:28:07.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:28:07.749 "hdgst": ${hdgst:-false}, 00:28:07.749 "ddgst": ${ddgst:-false} 00:28:07.749 }, 00:28:07.749 "method": "bdev_nvme_attach_controller" 00:28:07.749 } 00:28:07.749 EOF 00:28:07.749 )") 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:07.749 "params": { 00:28:07.749 "name": "Nvme0", 00:28:07.749 "trtype": "tcp", 00:28:07.749 "traddr": "10.0.0.3", 00:28:07.749 "adrfam": "ipv4", 00:28:07.749 "trsvcid": "4420", 00:28:07.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:07.749 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:07.749 "hdgst": false, 00:28:07.749 "ddgst": false 00:28:07.749 }, 00:28:07.749 "method": "bdev_nvme_attach_controller" 00:28:07.749 }' 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:07.749 09:05:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:08.008 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:08.008 ... 
00:28:08.008 fio-3.35 00:28:08.008 Starting 3 threads 00:28:14.573 00:28:14.573 filename0: (groupid=0, jobs=1): err= 0: pid=89553: Sat Sep 28 09:05:51 2024 00:28:14.573 read: IOPS=237, BW=29.6MiB/s (31.1MB/s)(149MiB/5009msec) 00:28:14.573 slat (nsec): min=5471, max=50678, avg=17481.99, stdev=5064.03 00:28:14.573 clat (usec): min=12090, max=16723, avg=12605.66, stdev=506.94 00:28:14.573 lat (usec): min=12106, max=16750, avg=12623.15, stdev=507.46 00:28:14.573 clat percentiles (usec): 00:28:14.573 | 1.00th=[12125], 5.00th=[12256], 10.00th=[12256], 20.00th=[12256], 00:28:14.573 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:28:14.573 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13173], 95.00th=[13698], 00:28:14.573 | 99.00th=[14353], 99.50th=[14484], 99.90th=[16712], 99.95th=[16712], 00:28:14.573 | 99.99th=[16712] 00:28:14.573 bw ( KiB/s): min=29892, max=30720, per=33.32%, avg=30330.00, stdev=411.49, samples=10 00:28:14.573 iops : min= 233, max= 240, avg=236.90, stdev= 3.28, samples=10 00:28:14.573 lat (msec) : 20=100.00% 00:28:14.573 cpu : usr=92.33%, sys=7.11%, ctx=8, majf=0, minf=1075 00:28:14.573 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:14.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.573 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:14.573 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:14.573 filename0: (groupid=0, jobs=1): err= 0: pid=89554: Sat Sep 28 09:05:51 2024 00:28:14.573 read: IOPS=237, BW=29.6MiB/s (31.1MB/s)(149MiB/5012msec) 00:28:14.573 slat (nsec): min=5584, max=59268, avg=17256.00, stdev=5763.32 00:28:14.573 clat (usec): min=12075, max=20598, avg=12615.51, stdev=614.20 00:28:14.573 lat (usec): min=12083, max=20622, avg=12632.76, stdev=614.66 00:28:14.573 clat percentiles (usec): 00:28:14.573 | 1.00th=[12125], 5.00th=[12125], 10.00th=[12256], 20.00th=[12256], 00:28:14.573 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:28:14.573 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13173], 95.00th=[13566], 00:28:14.573 | 99.00th=[14222], 99.50th=[14484], 99.90th=[20579], 99.95th=[20579], 00:28:14.573 | 99.99th=[20579] 00:28:14.573 bw ( KiB/s): min=29952, max=30720, per=33.33%, avg=30336.00, stdev=404.77, samples=10 00:28:14.573 iops : min= 234, max= 240, avg=237.00, stdev= 3.16, samples=10 00:28:14.573 lat (msec) : 20=99.75%, 50=0.25% 00:28:14.573 cpu : usr=91.42%, sys=7.96%, ctx=43, majf=0, minf=1076 00:28:14.573 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:14.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.573 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:14.573 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:14.573 filename0: (groupid=0, jobs=1): err= 0: pid=89555: Sat Sep 28 09:05:51 2024 00:28:14.573 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(149MiB/5007msec) 00:28:14.573 slat (nsec): min=5773, max=59440, avg=17808.63, stdev=5792.21 00:28:14.573 clat (usec): min=12095, max=15330, avg=12600.60, stdev=481.69 00:28:14.573 lat (usec): min=12111, max=15351, avg=12618.41, stdev=482.29 00:28:14.573 clat percentiles (usec): 00:28:14.573 | 1.00th=[12125], 5.00th=[12256], 10.00th=[12256], 20.00th=[12256], 00:28:14.573 | 30.00th=[12256], 
40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:28:14.573 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13173], 95.00th=[13698], 00:28:14.573 | 99.00th=[14353], 99.50th=[14484], 99.90th=[15270], 99.95th=[15270], 00:28:14.573 | 99.99th=[15270] 00:28:14.573 bw ( KiB/s): min=29952, max=30720, per=33.34%, avg=30342.00, stdev=398.85, samples=10 00:28:14.573 iops : min= 234, max= 240, avg=237.00, stdev= 3.16, samples=10 00:28:14.573 lat (msec) : 20=100.00% 00:28:14.573 cpu : usr=91.27%, sys=8.03%, ctx=88, majf=0, minf=1073 00:28:14.573 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:14.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.573 issued rwts: total=1188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:14.573 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:14.573 00:28:14.573 Run status group 0 (all jobs): 00:28:14.573 READ: bw=88.9MiB/s (93.2MB/s), 29.6MiB/s-29.7MiB/s (31.1MB/s-31.1MB/s), io=446MiB (467MB), run=5007-5012msec 00:28:14.573 ----------------------------------------------------- 00:28:14.573 Suppressions used: 00:28:14.573 count bytes template 00:28:14.573 5 44 /usr/src/fio/parse.c 00:28:14.573 1 8 libtcmalloc_minimal.so 00:28:14.573 1 904 libcrypto.so 00:28:14.573 ----------------------------------------------------- 00:28:14.573 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:14.573 
09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.573 bdev_null0 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.573 [2024-09-28 09:05:52.545586] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.573 bdev_null1 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.573 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:14.574 
09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.574 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.833 bdev_null2 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:14.833 09:05:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:14.833 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:14.833 { 00:28:14.833 "params": { 00:28:14.833 "name": "Nvme$subsystem", 00:28:14.833 "trtype": "$TEST_TRANSPORT", 00:28:14.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.833 "adrfam": "ipv4", 00:28:14.833 "trsvcid": "$NVMF_PORT", 00:28:14.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.834 "hdgst": ${hdgst:-false}, 00:28:14.834 "ddgst": ${ddgst:-false} 00:28:14.834 }, 00:28:14.834 "method": "bdev_nvme_attach_controller" 00:28:14.834 } 00:28:14.834 EOF 00:28:14.834 )") 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:14.834 { 00:28:14.834 "params": { 00:28:14.834 "name": "Nvme$subsystem", 00:28:14.834 "trtype": "$TEST_TRANSPORT", 00:28:14.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.834 "adrfam": "ipv4", 00:28:14.834 "trsvcid": "$NVMF_PORT", 00:28:14.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.834 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:28:14.834 "hdgst": ${hdgst:-false}, 00:28:14.834 "ddgst": ${ddgst:-false} 00:28:14.834 }, 00:28:14.834 "method": "bdev_nvme_attach_controller" 00:28:14.834 } 00:28:14.834 EOF 00:28:14.834 )") 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:14.834 { 00:28:14.834 "params": { 00:28:14.834 "name": "Nvme$subsystem", 00:28:14.834 "trtype": "$TEST_TRANSPORT", 00:28:14.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.834 "adrfam": "ipv4", 00:28:14.834 "trsvcid": "$NVMF_PORT", 00:28:14.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.834 "hdgst": ${hdgst:-false}, 00:28:14.834 "ddgst": ${ddgst:-false} 00:28:14.834 }, 00:28:14.834 "method": "bdev_nvme_attach_controller" 00:28:14.834 } 00:28:14.834 EOF 00:28:14.834 )") 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:14.834 "params": { 00:28:14.834 "name": "Nvme0", 00:28:14.834 "trtype": "tcp", 00:28:14.834 "traddr": "10.0.0.3", 00:28:14.834 "adrfam": "ipv4", 00:28:14.834 "trsvcid": "4420", 00:28:14.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.834 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:14.834 "hdgst": false, 00:28:14.834 "ddgst": false 00:28:14.834 }, 00:28:14.834 "method": "bdev_nvme_attach_controller" 00:28:14.834 },{ 00:28:14.834 "params": { 00:28:14.834 "name": "Nvme1", 00:28:14.834 "trtype": "tcp", 00:28:14.834 "traddr": "10.0.0.3", 00:28:14.834 "adrfam": "ipv4", 00:28:14.834 "trsvcid": "4420", 00:28:14.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:14.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:14.834 "hdgst": false, 00:28:14.834 "ddgst": false 00:28:14.834 }, 00:28:14.834 "method": "bdev_nvme_attach_controller" 00:28:14.834 },{ 00:28:14.834 "params": { 00:28:14.834 "name": "Nvme2", 00:28:14.834 "trtype": "tcp", 00:28:14.834 "traddr": "10.0.0.3", 00:28:14.834 "adrfam": "ipv4", 00:28:14.834 "trsvcid": "4420", 00:28:14.834 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:14.834 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:14.834 "hdgst": false, 00:28:14.834 "ddgst": false 00:28:14.834 }, 00:28:14.834 "method": "bdev_nvme_attach_controller" 00:28:14.834 }' 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:14.834 09:05:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:15.093 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:15.093 ... 00:28:15.093 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:15.093 ... 00:28:15.093 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:15.093 ... 00:28:15.093 fio-3.35 00:28:15.094 Starting 24 threads 00:28:27.294 00:28:27.294 filename0: (groupid=0, jobs=1): err= 0: pid=89649: Sat Sep 28 09:06:03 2024 00:28:27.294 read: IOPS=153, BW=614KiB/s (629kB/s)(6160KiB/10033msec) 00:28:27.294 slat (usec): min=5, max=8035, avg=31.99, stdev=347.50 00:28:27.294 clat (msec): min=50, max=166, avg=103.96, stdev=23.87 00:28:27.294 lat (msec): min=50, max=166, avg=103.99, stdev=23.87 00:28:27.294 clat percentiles (msec): 00:28:27.294 | 1.00th=[ 61], 5.00th=[ 73], 10.00th=[ 82], 20.00th=[ 84], 00:28:27.294 | 30.00th=[ 85], 40.00th=[ 92], 50.00th=[ 96], 60.00th=[ 108], 00:28:27.294 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 142], 95.00th=[ 144], 00:28:27.294 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 167], 00:28:27.294 | 99.99th=[ 167] 00:28:27.294 bw ( KiB/s): min= 504, max= 768, per=4.16%, avg=611.75, stdev=76.50, samples=20 00:28:27.294 iops : min= 126, max= 192, avg=152.90, stdev=19.06, samples=20 00:28:27.294 lat (msec) : 100=57.66%, 250=42.34% 00:28:27.294 cpu : usr=31.52%, sys=1.88%, ctx=877, majf=0, minf=1074 00:28:27.294 IO depths : 1=0.1%, 2=2.7%, 4=10.8%, 8=71.9%, 16=14.4%, 32=0.0%, >=64=0.0% 00:28:27.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.294 complete : 0=0.0%, 4=90.0%, 8=7.6%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.294 issued rwts: total=1540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.294 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.294 filename0: (groupid=0, jobs=1): err= 0: pid=89650: Sat Sep 28 09:06:03 2024 00:28:27.294 read: IOPS=157, BW=630KiB/s (645kB/s)(6364KiB/10100msec) 00:28:27.294 slat (usec): min=5, max=8035, avg=43.01, stdev=415.60 00:28:27.294 clat (msec): min=8, max=179, avg=101.03, stdev=30.91 00:28:27.294 lat (msec): min=8, max=179, avg=101.07, stdev=30.92 00:28:27.294 clat percentiles (msec): 00:28:27.294 | 1.00th=[ 9], 5.00th=[ 48], 10.00th=[ 72], 20.00th=[ 82], 00:28:27.294 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 110], 00:28:27.294 | 70.00th=[ 125], 80.00th=[ 132], 90.00th=[ 140], 95.00th=[ 144], 00:28:27.294 | 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 176], 99.95th=[ 180], 00:28:27.294 | 99.99th=[ 180] 00:28:27.294 bw ( KiB/s): min= 480, max= 1152, per=4.29%, avg=630.00, stdev=160.27, samples=20 00:28:27.294 iops : min= 120, max= 288, avg=157.50, stdev=40.07, samples=20 00:28:27.294 lat (msec) : 10=2.01%, 50=3.71%, 100=50.16%, 250=44.12% 00:28:27.294 cpu : usr=35.97%, sys=2.25%, ctx=1045, majf=0, minf=1073 00:28:27.294 IO depths : 1=0.2%, 2=2.7%, 4=10.2%, 8=72.2%, 16=14.6%, 32=0.0%, >=64=0.0% 00:28:27.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.294 complete : 0=0.0%, 4=90.0%, 8=7.7%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.294 issued rwts: total=1591,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.294 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.294 filename0: (groupid=0, jobs=1): err= 0: pid=89651: Sat Sep 28 09:06:03 2024 00:28:27.294 read: IOPS=146, BW=588KiB/s 
(602kB/s)(5884KiB/10010msec) 00:28:27.294 slat (usec): min=4, max=8032, avg=22.34, stdev=209.08 00:28:27.294 clat (msec): min=9, max=189, avg=108.71, stdev=30.44 00:28:27.294 lat (msec): min=9, max=189, avg=108.74, stdev=30.44 00:28:27.294 clat percentiles (msec): 00:28:27.294 | 1.00th=[ 15], 5.00th=[ 74], 10.00th=[ 82], 20.00th=[ 84], 00:28:27.294 | 30.00th=[ 85], 40.00th=[ 94], 50.00th=[ 105], 60.00th=[ 122], 00:28:27.294 | 70.00th=[ 132], 80.00th=[ 138], 90.00th=[ 142], 95.00th=[ 148], 00:28:27.294 | 99.00th=[ 180], 99.50th=[ 180], 99.90th=[ 190], 99.95th=[ 190], 00:28:27.294 | 99.99th=[ 190] 00:28:27.294 bw ( KiB/s): min= 400, max= 768, per=3.89%, avg=571.42, stdev=123.26, samples=19 00:28:27.294 iops : min= 100, max= 192, avg=142.79, stdev=30.80, samples=19 00:28:27.294 lat (msec) : 10=0.27%, 20=1.09%, 50=1.02%, 100=47.11%, 250=50.51% 00:28:27.294 cpu : usr=35.89%, sys=2.01%, ctx=1069, majf=0, minf=1073 00:28:27.294 IO depths : 1=0.1%, 2=4.8%, 4=19.2%, 8=62.7%, 16=13.2%, 32=0.0%, >=64=0.0% 00:28:27.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.294 complete : 0=0.0%, 4=92.6%, 8=3.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.294 issued rwts: total=1471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.294 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.294 filename0: (groupid=0, jobs=1): err= 0: pid=89652: Sat Sep 28 09:06:03 2024 00:28:27.294 read: IOPS=153, BW=613KiB/s (628kB/s)(6152KiB/10038msec) 00:28:27.294 slat (usec): min=5, max=4042, avg=20.08, stdev=102.81 00:28:27.294 clat (msec): min=39, max=155, avg=104.17, stdev=23.18 00:28:27.294 lat (msec): min=39, max=155, avg=104.19, stdev=23.18 00:28:27.294 clat percentiles (msec): 00:28:27.294 | 1.00th=[ 57], 5.00th=[ 74], 10.00th=[ 81], 20.00th=[ 84], 00:28:27.294 | 30.00th=[ 88], 40.00th=[ 93], 50.00th=[ 97], 60.00th=[ 109], 00:28:27.294 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 138], 95.00th=[ 142], 00:28:27.294 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 155], 00:28:27.294 | 99.99th=[ 155] 00:28:27.294 bw ( KiB/s): min= 512, max= 768, per=4.14%, avg=608.95, stdev=76.76, samples=19 00:28:27.294 iops : min= 128, max= 192, avg=152.16, stdev=19.21, samples=19 00:28:27.294 lat (msec) : 50=0.59%, 100=52.21%, 250=47.20% 00:28:27.294 cpu : usr=41.93%, sys=2.71%, ctx=1252, majf=0, minf=1072 00:28:27.294 IO depths : 1=0.1%, 2=2.8%, 4=11.1%, 8=71.7%, 16=14.3%, 32=0.0%, >=64=0.0% 00:28:27.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.294 complete : 0=0.0%, 4=90.1%, 8=7.5%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.294 issued rwts: total=1538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.294 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.294 filename0: (groupid=0, jobs=1): err= 0: pid=89653: Sat Sep 28 09:06:03 2024 00:28:27.294 read: IOPS=164, BW=658KiB/s (673kB/s)(6596KiB/10031msec) 00:28:27.294 slat (usec): min=4, max=7362, avg=28.63, stdev=234.66 00:28:27.294 clat (msec): min=31, max=160, avg=97.15, stdev=27.66 00:28:27.294 lat (msec): min=31, max=160, avg=97.18, stdev=27.66 00:28:27.294 clat percentiles (msec): 00:28:27.294 | 1.00th=[ 36], 5.00th=[ 55], 10.00th=[ 63], 20.00th=[ 77], 00:28:27.294 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 92], 60.00th=[ 97], 00:28:27.294 | 70.00th=[ 114], 80.00th=[ 130], 90.00th=[ 136], 95.00th=[ 142], 00:28:27.294 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 155], 99.95th=[ 161], 00:28:27.294 | 99.99th=[ 161] 00:28:27.294 bw ( KiB/s): min= 560, max= 920, per=4.45%, 
avg=654.00, stdev=116.03, samples=19 00:28:27.294 iops : min= 140, max= 230, avg=163.47, stdev=29.01, samples=19 00:28:27.294 lat (msec) : 50=3.70%, 100=59.31%, 250=36.99% 00:28:27.294 cpu : usr=43.38%, sys=2.55%, ctx=1552, majf=0, minf=1073 00:28:27.294 IO depths : 1=0.1%, 2=1.2%, 4=4.5%, 8=79.1%, 16=15.1%, 32=0.0%, >=64=0.0% 00:28:27.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.294 complete : 0=0.0%, 4=87.9%, 8=11.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.294 issued rwts: total=1649,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.294 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.294 filename0: (groupid=0, jobs=1): err= 0: pid=89654: Sat Sep 28 09:06:03 2024 00:28:27.294 read: IOPS=168, BW=674KiB/s (690kB/s)(6764KiB/10034msec) 00:28:27.294 slat (usec): min=5, max=8035, avg=29.46, stdev=292.34 00:28:27.294 clat (msec): min=32, max=167, avg=94.73, stdev=28.56 00:28:27.294 lat (msec): min=32, max=167, avg=94.76, stdev=28.57 00:28:27.294 clat percentiles (msec): 00:28:27.294 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:28:27.294 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 92], 60.00th=[ 96], 00:28:27.294 | 70.00th=[ 108], 80.00th=[ 129], 90.00th=[ 136], 95.00th=[ 142], 00:28:27.294 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 148], 99.95th=[ 169], 00:28:27.294 | 99.99th=[ 169] 00:28:27.294 bw ( KiB/s): min= 560, max= 944, per=4.60%, avg=676.21, stdev=137.97, samples=19 00:28:27.294 iops : min= 140, max= 236, avg=169.00, stdev=34.50, samples=19 00:28:27.294 lat (msec) : 50=6.80%, 100=58.43%, 250=34.77% 00:28:27.294 cpu : usr=37.11%, sys=2.45%, ctx=1176, majf=0, minf=1074 00:28:27.294 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:28:27.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 issued rwts: total=1691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.295 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.295 filename0: (groupid=0, jobs=1): err= 0: pid=89655: Sat Sep 28 09:06:03 2024 00:28:27.295 read: IOPS=161, BW=645KiB/s (660kB/s)(6480KiB/10051msec) 00:28:27.295 slat (usec): min=5, max=8034, avg=21.66, stdev=199.28 00:28:27.295 clat (msec): min=33, max=165, avg=99.04, stdev=26.83 00:28:27.295 lat (msec): min=33, max=165, avg=99.06, stdev=26.83 00:28:27.295 clat percentiles (msec): 00:28:27.295 | 1.00th=[ 45], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 81], 00:28:27.295 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 95], 60.00th=[ 99], 00:28:27.295 | 70.00th=[ 118], 80.00th=[ 132], 90.00th=[ 140], 95.00th=[ 142], 00:28:27.295 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 167], 00:28:27.295 | 99.99th=[ 167] 00:28:27.295 bw ( KiB/s): min= 512, max= 872, per=4.37%, avg=641.60, stdev=116.79, samples=20 00:28:27.295 iops : min= 128, max= 218, avg=160.40, stdev=29.20, samples=20 00:28:27.295 lat (msec) : 50=2.53%, 100=59.32%, 250=38.15% 00:28:27.295 cpu : usr=31.75%, sys=1.80%, ctx=932, majf=0, minf=1072 00:28:27.295 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.1%, 16=15.6%, 32=0.0%, >=64=0.0% 00:28:27.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 complete : 0=0.0%, 4=87.9%, 8=11.4%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 issued rwts: total=1620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.295 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.295 filename0: (groupid=0, 
jobs=1): err= 0: pid=89656: Sat Sep 28 09:06:03 2024 00:28:27.295 read: IOPS=142, BW=570KiB/s (584kB/s)(5752KiB/10089msec) 00:28:27.295 slat (usec): min=4, max=8035, avg=28.32, stdev=259.23 00:28:27.295 clat (msec): min=8, max=197, avg=111.97, stdev=35.28 00:28:27.295 lat (msec): min=9, max=197, avg=112.00, stdev=35.29 00:28:27.295 clat percentiles (msec): 00:28:27.295 | 1.00th=[ 10], 5.00th=[ 40], 10.00th=[ 77], 20.00th=[ 84], 00:28:27.295 | 30.00th=[ 88], 40.00th=[ 104], 50.00th=[ 120], 60.00th=[ 131], 00:28:27.295 | 70.00th=[ 136], 80.00th=[ 142], 90.00th=[ 144], 95.00th=[ 165], 00:28:27.295 | 99.00th=[ 188], 99.50th=[ 188], 99.90th=[ 199], 99.95th=[ 199], 00:28:27.295 | 99.99th=[ 199] 00:28:27.295 bw ( KiB/s): min= 384, max= 1152, per=3.87%, avg=568.80, stdev=181.50, samples=20 00:28:27.295 iops : min= 96, max= 288, avg=142.20, stdev=45.38, samples=20 00:28:27.295 lat (msec) : 10=2.23%, 50=3.20%, 100=32.68%, 250=61.89% 00:28:27.295 cpu : usr=44.38%, sys=2.87%, ctx=1600, majf=0, minf=1075 00:28:27.295 IO depths : 1=0.1%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:28:27.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 issued rwts: total=1438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.295 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.295 filename1: (groupid=0, jobs=1): err= 0: pid=89658: Sat Sep 28 09:06:03 2024 00:28:27.295 read: IOPS=165, BW=662KiB/s (678kB/s)(6672KiB/10074msec) 00:28:27.295 slat (usec): min=5, max=8043, avg=28.62, stdev=294.62 00:28:27.295 clat (msec): min=30, max=176, avg=96.26, stdev=29.99 00:28:27.295 lat (msec): min=30, max=176, avg=96.29, stdev=29.99 00:28:27.295 clat percentiles (msec): 00:28:27.295 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 72], 00:28:27.295 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 96], 00:28:27.295 | 70.00th=[ 120], 80.00th=[ 131], 90.00th=[ 138], 95.00th=[ 142], 00:28:27.295 | 99.00th=[ 150], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 178], 00:28:27.295 | 99.99th=[ 178] 00:28:27.295 bw ( KiB/s): min= 504, max= 984, per=4.51%, avg=662.80, stdev=153.76, samples=20 00:28:27.295 iops : min= 126, max= 246, avg=165.65, stdev=38.47, samples=20 00:28:27.295 lat (msec) : 50=7.19%, 100=55.16%, 250=37.65% 00:28:27.295 cpu : usr=36.75%, sys=2.32%, ctx=1090, majf=0, minf=1075 00:28:27.295 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:27.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 complete : 0=0.0%, 4=87.5%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 issued rwts: total=1668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.295 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.295 filename1: (groupid=0, jobs=1): err= 0: pid=89659: Sat Sep 28 09:06:03 2024 00:28:27.295 read: IOPS=167, BW=669KiB/s (685kB/s)(6716KiB/10040msec) 00:28:27.295 slat (usec): min=5, max=4035, avg=18.87, stdev=98.28 00:28:27.295 clat (msec): min=30, max=167, avg=95.48, stdev=29.12 00:28:27.295 lat (msec): min=30, max=167, avg=95.50, stdev=29.12 00:28:27.295 clat percentiles (msec): 00:28:27.295 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:28:27.295 | 30.00th=[ 81], 40.00th=[ 85], 50.00th=[ 91], 60.00th=[ 96], 00:28:27.295 | 70.00th=[ 109], 80.00th=[ 132], 90.00th=[ 136], 95.00th=[ 144], 00:28:27.295 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 167], 99.95th=[ 169], 
00:28:27.295 | 99.99th=[ 169] 00:28:27.295 bw ( KiB/s): min= 512, max= 896, per=4.54%, avg=667.35, stdev=135.51, samples=20 00:28:27.295 iops : min= 128, max= 224, avg=166.80, stdev=33.88, samples=20 00:28:27.295 lat (msec) : 50=6.85%, 100=56.82%, 250=36.33% 00:28:27.295 cpu : usr=35.13%, sys=2.18%, ctx=1050, majf=0, minf=1071 00:28:27.295 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:28:27.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 issued rwts: total=1679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.295 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.295 filename1: (groupid=0, jobs=1): err= 0: pid=89660: Sat Sep 28 09:06:03 2024 00:28:27.295 read: IOPS=144, BW=576KiB/s (590kB/s)(5792KiB/10055msec) 00:28:27.295 slat (usec): min=5, max=8033, avg=30.36, stdev=276.98 00:28:27.295 clat (msec): min=57, max=190, avg=110.72, stdev=27.38 00:28:27.295 lat (msec): min=57, max=190, avg=110.75, stdev=27.39 00:28:27.295 clat percentiles (msec): 00:28:27.295 | 1.00th=[ 64], 5.00th=[ 79], 10.00th=[ 83], 20.00th=[ 85], 00:28:27.295 | 30.00th=[ 89], 40.00th=[ 95], 50.00th=[ 105], 60.00th=[ 120], 00:28:27.295 | 70.00th=[ 130], 80.00th=[ 136], 90.00th=[ 144], 95.00th=[ 161], 00:28:27.295 | 99.00th=[ 184], 99.50th=[ 184], 99.90th=[ 190], 99.95th=[ 190], 00:28:27.295 | 99.99th=[ 190] 00:28:27.295 bw ( KiB/s): min= 384, max= 752, per=3.90%, avg=572.80, stdev=100.74, samples=20 00:28:27.295 iops : min= 96, max= 188, avg=143.20, stdev=25.18, samples=20 00:28:27.295 lat (msec) : 100=45.17%, 250=54.83% 00:28:27.295 cpu : usr=42.75%, sys=2.87%, ctx=1701, majf=0, minf=1075 00:28:27.295 IO depths : 1=0.1%, 2=4.0%, 4=16.0%, 8=66.4%, 16=13.6%, 32=0.0%, >=64=0.0% 00:28:27.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 complete : 0=0.0%, 4=91.5%, 8=5.0%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 issued rwts: total=1448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.295 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.295 filename1: (groupid=0, jobs=1): err= 0: pid=89661: Sat Sep 28 09:06:03 2024 00:28:27.295 read: IOPS=136, BW=547KiB/s (560kB/s)(5504KiB/10062msec) 00:28:27.295 slat (usec): min=5, max=4035, avg=27.93, stdev=211.34 00:28:27.295 clat (msec): min=65, max=199, avg=116.54, stdev=28.53 00:28:27.295 lat (msec): min=65, max=199, avg=116.57, stdev=28.54 00:28:27.295 clat percentiles (msec): 00:28:27.295 | 1.00th=[ 66], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:28:27.295 | 30.00th=[ 88], 40.00th=[ 109], 50.00th=[ 125], 60.00th=[ 132], 00:28:27.295 | 70.00th=[ 134], 80.00th=[ 140], 90.00th=[ 144], 95.00th=[ 167], 00:28:27.295 | 99.00th=[ 188], 99.50th=[ 188], 99.90th=[ 199], 99.95th=[ 199], 00:28:27.295 | 99.99th=[ 199] 00:28:27.295 bw ( KiB/s): min= 384, max= 769, per=3.70%, avg=543.95, stdev=121.82, samples=20 00:28:27.295 iops : min= 96, max= 192, avg=135.90, stdev=30.41, samples=20 00:28:27.295 lat (msec) : 100=35.76%, 250=64.24% 00:28:27.295 cpu : usr=40.38%, sys=2.45%, ctx=1175, majf=0, minf=1074 00:28:27.295 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:27.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.295 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:28:27.295 filename1: (groupid=0, jobs=1): err= 0: pid=89662: Sat Sep 28 09:06:03 2024 00:28:27.295 read: IOPS=139, BW=556KiB/s (570kB/s)(5568KiB/10011msec) 00:28:27.295 slat (usec): min=5, max=8038, avg=35.92, stdev=348.57 00:28:27.295 clat (msec): min=12, max=183, avg=114.77, stdev=27.32 00:28:27.295 lat (msec): min=12, max=183, avg=114.80, stdev=27.31 00:28:27.295 clat percentiles (msec): 00:28:27.295 | 1.00th=[ 30], 5.00th=[ 78], 10.00th=[ 82], 20.00th=[ 85], 00:28:27.295 | 30.00th=[ 89], 40.00th=[ 117], 50.00th=[ 124], 60.00th=[ 131], 00:28:27.295 | 70.00th=[ 133], 80.00th=[ 138], 90.00th=[ 142], 95.00th=[ 146], 00:28:27.295 | 99.00th=[ 178], 99.50th=[ 178], 99.90th=[ 184], 99.95th=[ 184], 00:28:27.295 | 99.99th=[ 184] 00:28:27.295 bw ( KiB/s): min= 384, max= 768, per=3.76%, avg=552.42, stdev=130.57, samples=19 00:28:27.295 iops : min= 96, max= 192, avg=138.11, stdev=32.64, samples=19 00:28:27.295 lat (msec) : 20=0.14%, 50=1.01%, 100=34.48%, 250=64.37% 00:28:27.295 cpu : usr=40.31%, sys=2.63%, ctx=1314, majf=0, minf=1073 00:28:27.295 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:27.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.295 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.295 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.295 filename1: (groupid=0, jobs=1): err= 0: pid=89663: Sat Sep 28 09:06:03 2024 00:28:27.295 read: IOPS=140, BW=563KiB/s (576kB/s)(5632KiB/10005msec) 00:28:27.295 slat (usec): min=5, max=8032, avg=26.66, stdev=302.10 00:28:27.295 clat (msec): min=12, max=192, avg=113.45, stdev=27.83 00:28:27.295 lat (msec): min=12, max=192, avg=113.48, stdev=27.82 00:28:27.295 clat percentiles (msec): 00:28:27.295 | 1.00th=[ 32], 5.00th=[ 82], 10.00th=[ 83], 20.00th=[ 85], 00:28:27.295 | 30.00th=[ 86], 40.00th=[ 100], 50.00th=[ 121], 60.00th=[ 130], 00:28:27.295 | 70.00th=[ 132], 80.00th=[ 138], 90.00th=[ 144], 95.00th=[ 144], 00:28:27.295 | 99.00th=[ 180], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 192], 00:28:27.295 | 99.99th=[ 192] 00:28:27.295 bw ( KiB/s): min= 384, max= 768, per=3.81%, avg=559.16, stdev=130.91, samples=19 00:28:27.295 iops : min= 96, max= 192, avg=139.79, stdev=32.73, samples=19 00:28:27.295 lat (msec) : 20=0.14%, 50=0.99%, 100=39.20%, 250=59.66% 00:28:27.296 cpu : usr=31.36%, sys=1.97%, ctx=939, majf=0, minf=1073 00:28:27.296 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:27.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 issued rwts: total=1408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.296 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.296 filename1: (groupid=0, jobs=1): err= 0: pid=89664: Sat Sep 28 09:06:03 2024 00:28:27.296 read: IOPS=164, BW=660KiB/s (676kB/s)(6632KiB/10051msec) 00:28:27.296 slat (usec): min=5, max=8034, avg=42.45, stdev=450.51 00:28:27.296 clat (msec): min=33, max=167, avg=96.71, stdev=27.75 00:28:27.296 lat (msec): min=33, max=167, avg=96.75, stdev=27.75 00:28:27.296 clat percentiles (msec): 00:28:27.296 | 1.00th=[ 45], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 73], 00:28:27.296 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 96], 00:28:27.296 | 70.00th=[ 109], 80.00th=[ 132], 90.00th=[ 140], 95.00th=[ 
144], 00:28:27.296 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 169], 00:28:27.296 | 99.99th=[ 169] 00:28:27.296 bw ( KiB/s): min= 536, max= 920, per=4.47%, avg=656.80, stdev=127.76, samples=20 00:28:27.296 iops : min= 134, max= 230, avg=164.20, stdev=31.94, samples=20 00:28:27.296 lat (msec) : 50=4.58%, 100=59.29%, 250=36.13% 00:28:27.296 cpu : usr=30.93%, sys=2.23%, ctx=885, majf=0, minf=1075 00:28:27.296 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:28:27.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 issued rwts: total=1658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.296 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.296 filename1: (groupid=0, jobs=1): err= 0: pid=89665: Sat Sep 28 09:06:03 2024 00:28:27.296 read: IOPS=135, BW=543KiB/s (556kB/s)(5448KiB/10027msec) 00:28:27.296 slat (nsec): min=5235, max=49874, avg=15790.12, stdev=6032.75 00:28:27.296 clat (msec): min=31, max=204, avg=117.45, stdev=28.17 00:28:27.296 lat (msec): min=31, max=204, avg=117.46, stdev=28.17 00:28:27.296 clat percentiles (msec): 00:28:27.296 | 1.00th=[ 77], 5.00th=[ 81], 10.00th=[ 81], 20.00th=[ 87], 00:28:27.296 | 30.00th=[ 89], 40.00th=[ 108], 50.00th=[ 124], 60.00th=[ 132], 00:28:27.296 | 70.00th=[ 136], 80.00th=[ 142], 90.00th=[ 146], 95.00th=[ 167], 00:28:27.296 | 99.00th=[ 178], 99.50th=[ 190], 99.90th=[ 205], 99.95th=[ 205], 00:28:27.296 | 99.99th=[ 205] 00:28:27.296 bw ( KiB/s): min= 397, max= 768, per=3.71%, avg=545.42, stdev=128.10, samples=19 00:28:27.296 iops : min= 99, max= 192, avg=136.32, stdev=32.05, samples=19 00:28:27.296 lat (msec) : 50=0.15%, 100=33.33%, 250=66.52% 00:28:27.296 cpu : usr=42.53%, sys=2.35%, ctx=1375, majf=0, minf=1071 00:28:27.296 IO depths : 1=0.1%, 2=6.2%, 4=24.9%, 8=56.3%, 16=12.5%, 32=0.0%, >=64=0.0% 00:28:27.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 complete : 0=0.0%, 4=94.5%, 8=0.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 issued rwts: total=1362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.296 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.296 filename2: (groupid=0, jobs=1): err= 0: pid=89666: Sat Sep 28 09:06:03 2024 00:28:27.296 read: IOPS=136, BW=546KiB/s (559kB/s)(5484KiB/10051msec) 00:28:27.296 slat (usec): min=5, max=559, avg=16.20, stdev=15.82 00:28:27.296 clat (msec): min=59, max=201, avg=117.00, stdev=30.96 00:28:27.296 lat (msec): min=59, max=201, avg=117.01, stdev=30.96 00:28:27.296 clat percentiles (msec): 00:28:27.296 | 1.00th=[ 63], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 85], 00:28:27.296 | 30.00th=[ 94], 40.00th=[ 104], 50.00th=[ 121], 60.00th=[ 132], 00:28:27.296 | 70.00th=[ 132], 80.00th=[ 142], 90.00th=[ 163], 95.00th=[ 180], 00:28:27.296 | 99.00th=[ 192], 99.50th=[ 192], 99.90th=[ 201], 99.95th=[ 201], 00:28:27.296 | 99.99th=[ 201] 00:28:27.296 bw ( KiB/s): min= 384, max= 768, per=3.69%, avg=542.00, stdev=123.44, samples=20 00:28:27.296 iops : min= 96, max= 192, avg=135.50, stdev=30.86, samples=20 00:28:27.296 lat (msec) : 100=36.69%, 250=63.31% 00:28:27.296 cpu : usr=35.38%, sys=2.13%, ctx=1051, majf=0, minf=1074 00:28:27.296 IO depths : 1=0.1%, 2=5.8%, 4=23.3%, 8=58.1%, 16=12.7%, 32=0.0%, >=64=0.0% 00:28:27.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 complete : 0=0.0%, 4=93.9%, 8=1.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:27.296 issued rwts: total=1371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.296 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.296 filename2: (groupid=0, jobs=1): err= 0: pid=89668: Sat Sep 28 09:06:03 2024 00:28:27.296 read: IOPS=167, BW=669KiB/s (685kB/s)(6716KiB/10042msec) 00:28:27.296 slat (usec): min=5, max=8047, avg=31.54, stdev=309.62 00:28:27.296 clat (msec): min=32, max=155, avg=95.42, stdev=28.29 00:28:27.296 lat (msec): min=32, max=155, avg=95.45, stdev=28.28 00:28:27.296 clat percentiles (msec): 00:28:27.296 | 1.00th=[ 36], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 72], 00:28:27.296 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 91], 60.00th=[ 97], 00:28:27.296 | 70.00th=[ 108], 80.00th=[ 131], 90.00th=[ 136], 95.00th=[ 142], 00:28:27.296 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 157], 99.95th=[ 157], 00:28:27.296 | 99.99th=[ 157] 00:28:27.296 bw ( KiB/s): min= 560, max= 896, per=4.54%, avg=667.30, stdev=128.68, samples=20 00:28:27.296 iops : min= 140, max= 224, avg=166.80, stdev=32.18, samples=20 00:28:27.296 lat (msec) : 50=5.60%, 100=58.73%, 250=35.68% 00:28:27.296 cpu : usr=37.85%, sys=2.51%, ctx=1312, majf=0, minf=1073 00:28:27.296 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:28:27.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 issued rwts: total=1679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.296 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.296 filename2: (groupid=0, jobs=1): err= 0: pid=89670: Sat Sep 28 09:06:03 2024 00:28:27.296 read: IOPS=136, BW=547KiB/s (560kB/s)(5504KiB/10059msec) 00:28:27.296 slat (usec): min=5, max=8035, avg=28.92, stdev=276.81 00:28:27.296 clat (msec): min=59, max=196, avg=116.60, stdev=28.06 00:28:27.296 lat (msec): min=59, max=196, avg=116.63, stdev=28.07 00:28:27.296 clat percentiles (msec): 00:28:27.296 | 1.00th=[ 61], 5.00th=[ 75], 10.00th=[ 84], 20.00th=[ 85], 00:28:27.296 | 30.00th=[ 93], 40.00th=[ 116], 50.00th=[ 124], 60.00th=[ 131], 00:28:27.296 | 70.00th=[ 136], 80.00th=[ 140], 90.00th=[ 144], 95.00th=[ 159], 00:28:27.296 | 99.00th=[ 180], 99.50th=[ 180], 99.90th=[ 197], 99.95th=[ 197], 00:28:27.296 | 99.99th=[ 197] 00:28:27.296 bw ( KiB/s): min= 384, max= 769, per=3.70%, avg=544.00, stdev=127.89, samples=20 00:28:27.296 iops : min= 96, max= 192, avg=135.90, stdev=31.92, samples=20 00:28:27.296 lat (msec) : 100=36.26%, 250=63.74% 00:28:27.296 cpu : usr=37.33%, sys=2.02%, ctx=1163, majf=0, minf=1072 00:28:27.296 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.4%, 32=0.0%, >=64=0.0% 00:28:27.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 issued rwts: total=1376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.296 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.296 filename2: (groupid=0, jobs=1): err= 0: pid=89671: Sat Sep 28 09:06:03 2024 00:28:27.296 read: IOPS=166, BW=664KiB/s (680kB/s)(6696KiB/10080msec) 00:28:27.296 slat (usec): min=5, max=8032, avg=24.58, stdev=277.06 00:28:27.296 clat (msec): min=9, max=179, avg=96.00, stdev=32.64 00:28:27.296 lat (msec): min=9, max=179, avg=96.02, stdev=32.64 00:28:27.296 clat percentiles (msec): 00:28:27.296 | 1.00th=[ 14], 5.00th=[ 46], 10.00th=[ 57], 20.00th=[ 72], 00:28:27.296 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 94], 60.00th=[ 99], 
00:28:27.296 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 142], 95.00th=[ 144], 00:28:27.296 | 99.00th=[ 146], 99.50th=[ 161], 99.90th=[ 180], 99.95th=[ 180], 00:28:27.296 | 99.99th=[ 180] 00:28:27.296 bw ( KiB/s): min= 480, max= 1192, per=4.53%, avg=665.60, stdev=184.53, samples=20 00:28:27.296 iops : min= 120, max= 298, avg=166.40, stdev=46.13, samples=20 00:28:27.296 lat (msec) : 10=0.96%, 20=1.91%, 50=5.44%, 100=53.17%, 250=38.53% 00:28:27.296 cpu : usr=31.78%, sys=1.94%, ctx=902, majf=0, minf=1073 00:28:27.296 IO depths : 1=0.2%, 2=0.5%, 4=1.6%, 8=81.5%, 16=16.2%, 32=0.0%, >=64=0.0% 00:28:27.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 issued rwts: total=1674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.296 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.296 filename2: (groupid=0, jobs=1): err= 0: pid=89674: Sat Sep 28 09:06:03 2024 00:28:27.296 read: IOPS=164, BW=657KiB/s (673kB/s)(6620KiB/10075msec) 00:28:27.296 slat (usec): min=4, max=8030, avg=29.01, stdev=286.17 00:28:27.296 clat (msec): min=9, max=179, avg=97.02, stdev=29.92 00:28:27.296 lat (msec): min=9, max=179, avg=97.05, stdev=29.92 00:28:27.296 clat percentiles (msec): 00:28:27.296 | 1.00th=[ 19], 5.00th=[ 47], 10.00th=[ 61], 20.00th=[ 77], 00:28:27.296 | 30.00th=[ 84], 40.00th=[ 88], 50.00th=[ 94], 60.00th=[ 99], 00:28:27.296 | 70.00th=[ 118], 80.00th=[ 131], 90.00th=[ 138], 95.00th=[ 144], 00:28:27.296 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 159], 99.95th=[ 180], 00:28:27.296 | 99.99th=[ 180] 00:28:27.296 bw ( KiB/s): min= 536, max= 1136, per=4.47%, avg=657.50, stdev=154.94, samples=20 00:28:27.296 iops : min= 134, max= 284, avg=164.30, stdev=38.77, samples=20 00:28:27.296 lat (msec) : 10=0.97%, 20=0.97%, 50=3.81%, 100=56.50%, 250=37.76% 00:28:27.296 cpu : usr=42.39%, sys=2.56%, ctx=1598, majf=0, minf=1075 00:28:27.296 IO depths : 1=0.1%, 2=1.3%, 4=5.0%, 8=78.3%, 16=15.3%, 32=0.0%, >=64=0.0% 00:28:27.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 complete : 0=0.0%, 4=88.4%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.296 issued rwts: total=1655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.296 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.296 filename2: (groupid=0, jobs=1): err= 0: pid=89675: Sat Sep 28 09:06:03 2024 00:28:27.296 read: IOPS=155, BW=623KiB/s (638kB/s)(6228KiB/10001msec) 00:28:27.296 slat (nsec): min=4280, max=44327, avg=15371.22, stdev=5316.85 00:28:27.296 clat (usec): min=1183, max=192120, avg=102674.40, stdev=32048.06 00:28:27.296 lat (usec): min=1192, max=192146, avg=102689.77, stdev=32047.42 00:28:27.296 clat percentiles (msec): 00:28:27.296 | 1.00th=[ 6], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 84], 00:28:27.296 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 96], 60.00th=[ 108], 00:28:27.296 | 70.00th=[ 121], 80.00th=[ 132], 90.00th=[ 144], 95.00th=[ 144], 00:28:27.296 | 99.00th=[ 180], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 192], 00:28:27.297 | 99.99th=[ 192] 00:28:27.297 bw ( KiB/s): min= 400, max= 752, per=4.05%, avg=594.32, stdev=99.57, samples=19 00:28:27.297 iops : min= 100, max= 188, avg=148.53, stdev=24.87, samples=19 00:28:27.297 lat (msec) : 2=0.39%, 4=0.32%, 10=2.18%, 20=0.58%, 50=1.09% 00:28:27.297 lat (msec) : 100=52.02%, 250=43.42% 00:28:27.297 cpu : usr=31.50%, sys=1.61%, ctx=853, majf=0, minf=1075 00:28:27.297 IO depths : 1=0.1%, 2=3.5%, 4=14.1%, 8=68.4%, 
16=13.9%, 32=0.0%, >=64=0.0% 00:28:27.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.297 complete : 0=0.0%, 4=91.0%, 8=5.9%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.297 issued rwts: total=1557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.297 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.297 filename2: (groupid=0, jobs=1): err= 0: pid=89676: Sat Sep 28 09:06:03 2024 00:28:27.297 read: IOPS=165, BW=663KiB/s (678kB/s)(6660KiB/10052msec) 00:28:27.297 slat (usec): min=4, max=8035, avg=23.54, stdev=219.83 00:28:27.297 clat (msec): min=31, max=158, avg=96.37, stdev=27.54 00:28:27.297 lat (msec): min=31, max=158, avg=96.39, stdev=27.54 00:28:27.297 clat percentiles (msec): 00:28:27.297 | 1.00th=[ 37], 5.00th=[ 55], 10.00th=[ 63], 20.00th=[ 74], 00:28:27.297 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 93], 60.00th=[ 96], 00:28:27.297 | 70.00th=[ 110], 80.00th=[ 132], 90.00th=[ 136], 95.00th=[ 144], 00:28:27.297 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 159], 00:28:27.297 | 99.99th=[ 159] 00:28:27.297 bw ( KiB/s): min= 536, max= 864, per=4.49%, avg=659.60, stdev=122.47, samples=20 00:28:27.297 iops : min= 134, max= 216, avg=164.90, stdev=30.62, samples=20 00:28:27.297 lat (msec) : 50=4.08%, 100=61.86%, 250=34.05% 00:28:27.297 cpu : usr=35.43%, sys=1.96%, ctx=1043, majf=0, minf=1072 00:28:27.297 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=81.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:28:27.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.297 complete : 0=0.0%, 4=87.4%, 8=12.1%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.297 issued rwts: total=1665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.297 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.297 filename2: (groupid=0, jobs=1): err= 0: pid=89677: Sat Sep 28 09:06:03 2024 00:28:27.297 read: IOPS=157, BW=629KiB/s (644kB/s)(6292KiB/10005msec) 00:28:27.297 slat (usec): min=4, max=8033, avg=26.29, stdev=268.71 00:28:27.297 clat (msec): min=4, max=192, avg=101.60, stdev=29.67 00:28:27.297 lat (msec): min=4, max=192, avg=101.62, stdev=29.66 00:28:27.297 clat percentiles (msec): 00:28:27.297 | 1.00th=[ 7], 5.00th=[ 61], 10.00th=[ 74], 20.00th=[ 83], 00:28:27.297 | 30.00th=[ 85], 40.00th=[ 93], 50.00th=[ 96], 60.00th=[ 107], 00:28:27.297 | 70.00th=[ 122], 80.00th=[ 133], 90.00th=[ 140], 95.00th=[ 142], 00:28:27.297 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 192], 99.95th=[ 192], 00:28:27.297 | 99.99th=[ 192] 00:28:27.297 bw ( KiB/s): min= 488, max= 768, per=4.11%, avg=604.21, stdev=74.87, samples=19 00:28:27.297 iops : min= 122, max= 192, avg=151.05, stdev=18.72, samples=19 00:28:27.297 lat (msec) : 10=2.23%, 20=0.64%, 50=1.02%, 100=51.11%, 250=45.01% 00:28:27.297 cpu : usr=31.48%, sys=1.96%, ctx=942, majf=0, minf=1072 00:28:27.297 IO depths : 1=0.1%, 2=3.1%, 4=12.4%, 8=70.4%, 16=14.0%, 32=0.0%, >=64=0.0% 00:28:27.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.297 complete : 0=0.0%, 4=90.4%, 8=6.9%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:27.297 issued rwts: total=1573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:27.297 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:27.297 00:28:27.297 Run status group 0 (all jobs): 00:28:27.297 READ: bw=14.3MiB/s (15.0MB/s), 543KiB/s-674KiB/s (556kB/s-690kB/s), io=145MiB (152MB), run=10001-10100msec 00:28:27.297 ----------------------------------------------------- 00:28:27.297 Suppressions used: 00:28:27.297 count bytes template 00:28:27.297 
45 402 /usr/src/fio/parse.c 00:28:27.297 1 8 libtcmalloc_minimal.so 00:28:27.297 1 904 libcrypto.so 00:28:27.297 ----------------------------------------------------- 00:28:27.297 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.297 bdev_null0 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.297 [2024-09-28 09:06:05.085357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.297 bdev_null1 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:27.297 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:27.298 { 00:28:27.298 "params": { 00:28:27.298 "name": "Nvme$subsystem", 00:28:27.298 "trtype": "$TEST_TRANSPORT", 00:28:27.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.298 "adrfam": "ipv4", 00:28:27.298 "trsvcid": "$NVMF_PORT", 00:28:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.298 "hdgst": ${hdgst:-false}, 00:28:27.298 "ddgst": ${ddgst:-false} 00:28:27.298 }, 00:28:27.298 "method": "bdev_nvme_attach_controller" 00:28:27.298 } 00:28:27.298 EOF 00:28:27.298 )") 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:27.298 { 00:28:27.298 "params": { 00:28:27.298 "name": "Nvme$subsystem", 00:28:27.298 "trtype": "$TEST_TRANSPORT", 00:28:27.298 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:27.298 "adrfam": "ipv4", 00:28:27.298 "trsvcid": "$NVMF_PORT", 00:28:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:27.298 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:27.298 "hdgst": ${hdgst:-false}, 00:28:27.298 "ddgst": ${ddgst:-false} 00:28:27.298 }, 00:28:27.298 "method": "bdev_nvme_attach_controller" 00:28:27.298 } 00:28:27.298 EOF 00:28:27.298 )") 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
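Editor's note: the trace above shows the test assembling one bdev_nvme_attach_controller fragment per subsystem from a heredoc and then collapsing them into a single JSON document for fio's --spdk_json_conf. The following is a minimal standalone sketch of that pattern, not the autotest helper itself; the variable defaults and the outer [...] wrapper are assumptions for illustration.

#!/usr/bin/env bash
# Minimal sketch of the config-assembly pattern traced above. Defaults and the
# outer array wrapper are illustrative assumptions; the real helper lives in
# nvmf/common.sh.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.3}
NVMF_PORT=${NVMF_PORT:-4420}

config=()
for subsystem in 0 1; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the per-controller fragments with commas and pretty-print with jq,
# mirroring the IFS=, / printf / jq . steps visible in the trace below.
(IFS=,; printf '[%s]\n' "${config[*]}" | jq .)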
00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:27.298 "params": { 00:28:27.298 "name": "Nvme0", 00:28:27.298 "trtype": "tcp", 00:28:27.298 "traddr": "10.0.0.3", 00:28:27.298 "adrfam": "ipv4", 00:28:27.298 "trsvcid": "4420", 00:28:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:27.298 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:27.298 "hdgst": false, 00:28:27.298 "ddgst": false 00:28:27.298 }, 00:28:27.298 "method": "bdev_nvme_attach_controller" 00:28:27.298 },{ 00:28:27.298 "params": { 00:28:27.298 "name": "Nvme1", 00:28:27.298 "trtype": "tcp", 00:28:27.298 "traddr": "10.0.0.3", 00:28:27.298 "adrfam": "ipv4", 00:28:27.298 "trsvcid": "4420", 00:28:27.298 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:27.298 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:27.298 "hdgst": false, 00:28:27.298 "ddgst": false 00:28:27.298 }, 00:28:27.298 "method": "bdev_nvme_attach_controller" 00:28:27.298 }' 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # break 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:27.298 09:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:27.557 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:27.557 ... 00:28:27.557 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:27.557 ... 
00:28:27.557 fio-3.35 00:28:27.557 Starting 4 threads 00:28:34.118 00:28:34.118 filename0: (groupid=0, jobs=1): err= 0: pid=89806: Sat Sep 28 09:06:11 2024 00:28:34.118 read: IOPS=1936, BW=15.1MiB/s (15.9MB/s)(75.7MiB/5001msec) 00:28:34.118 slat (nsec): min=3757, max=61945, avg=14066.97, stdev=5511.68 00:28:34.118 clat (usec): min=771, max=7511, avg=4080.69, stdev=965.36 00:28:34.118 lat (usec): min=780, max=7556, avg=4094.76, stdev=964.90 00:28:34.118 clat percentiles (usec): 00:28:34.118 | 1.00th=[ 922], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 3359], 00:28:34.118 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:28:34.118 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 4948], 95.00th=[ 5211], 00:28:34.118 | 99.00th=[ 5800], 99.50th=[ 6587], 99.90th=[ 7177], 99.95th=[ 7439], 00:28:34.118 | 99.99th=[ 7504] 00:28:34.118 bw ( KiB/s): min=14064, max=17760, per=26.53%, avg=15527.11, stdev=1478.86, samples=9 00:28:34.118 iops : min= 1758, max= 2220, avg=1940.89, stdev=184.86, samples=9 00:28:34.118 lat (usec) : 1000=1.03% 00:28:34.118 lat (msec) : 2=2.16%, 4=19.61%, 10=77.20% 00:28:34.118 cpu : usr=90.86%, sys=8.16%, ctx=11, majf=0, minf=1075 00:28:34.118 IO depths : 1=0.1%, 2=13.9%, 4=56.2%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:34.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.118 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.118 issued rwts: total=9686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:34.118 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:34.118 filename0: (groupid=0, jobs=1): err= 0: pid=89807: Sat Sep 28 09:06:11 2024 00:28:34.118 read: IOPS=1724, BW=13.5MiB/s (14.1MB/s)(67.4MiB/5002msec) 00:28:34.118 slat (usec): min=5, max=140, avg=16.44, stdev= 4.77 00:28:34.118 clat (usec): min=1420, max=8535, avg=4576.45, stdev=530.56 00:28:34.118 lat (usec): min=1435, max=8557, avg=4592.89, stdev=530.63 00:28:34.118 clat percentiles (usec): 00:28:34.118 | 1.00th=[ 2868], 5.00th=[ 4080], 10.00th=[ 4146], 20.00th=[ 4228], 00:28:34.118 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4621], 00:28:34.118 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5211], 95.00th=[ 5407], 00:28:34.118 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 8160], 99.95th=[ 8225], 00:28:34.118 | 99.99th=[ 8586] 00:28:34.118 bw ( KiB/s): min=12288, max=14608, per=23.30%, avg=13639.11, stdev=1002.27, samples=9 00:28:34.118 iops : min= 1536, max= 1826, avg=1704.89, stdev=125.28, samples=9 00:28:34.118 lat (msec) : 2=0.23%, 4=2.71%, 10=97.05% 00:28:34.118 cpu : usr=90.92%, sys=8.16%, ctx=78, majf=0, minf=1073 00:28:34.118 IO depths : 1=0.1%, 2=23.6%, 4=50.9%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:34.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.118 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.118 issued rwts: total=8624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:34.118 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:34.118 filename1: (groupid=0, jobs=1): err= 0: pid=89808: Sat Sep 28 09:06:11 2024 00:28:34.118 read: IOPS=1932, BW=15.1MiB/s (15.8MB/s)(75.5MiB/5003msec) 00:28:34.118 slat (nsec): min=3627, max=65426, avg=14035.33, stdev=5224.05 00:28:34.118 clat (usec): min=1014, max=10055, avg=4094.37, stdev=873.87 00:28:34.118 lat (usec): min=1023, max=10120, avg=4108.40, stdev=874.05 00:28:34.118 clat percentiles (usec): 00:28:34.118 | 1.00th=[ 1336], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 3326], 
00:28:34.118 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:28:34.118 | 70.00th=[ 4490], 80.00th=[ 4686], 90.00th=[ 4948], 95.00th=[ 5080], 00:28:34.118 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 7832], 99.95th=[ 9765], 00:28:34.118 | 99.99th=[10028] 00:28:34.118 bw ( KiB/s): min=14208, max=17248, per=26.47%, avg=15495.11, stdev=1280.68, samples=9 00:28:34.118 iops : min= 1776, max= 2156, avg=1936.89, stdev=160.08, samples=9 00:28:34.118 lat (msec) : 2=1.69%, 4=20.69%, 10=77.62%, 20=0.01% 00:28:34.118 cpu : usr=91.42%, sys=7.52%, ctx=272, majf=0, minf=1076 00:28:34.118 IO depths : 1=0.1%, 2=14.0%, 4=56.2%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:34.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.118 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.118 issued rwts: total=9668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:34.118 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:34.118 filename1: (groupid=0, jobs=1): err= 0: pid=89809: Sat Sep 28 09:06:11 2024 00:28:34.118 read: IOPS=1724, BW=13.5MiB/s (14.1MB/s)(67.4MiB/5001msec) 00:28:34.118 slat (nsec): min=5362, max=75011, avg=16308.62, stdev=4536.72 00:28:34.118 clat (usec): min=1402, max=8484, avg=4575.80, stdev=523.77 00:28:34.118 lat (usec): min=1417, max=8517, avg=4592.11, stdev=523.88 00:28:34.118 clat percentiles (usec): 00:28:34.118 | 1.00th=[ 2868], 5.00th=[ 4080], 10.00th=[ 4146], 20.00th=[ 4228], 00:28:34.119 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4621], 00:28:34.119 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5211], 95.00th=[ 5407], 00:28:34.119 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 7046], 99.95th=[ 7111], 00:28:34.119 | 99.99th=[ 8455] 00:28:34.119 bw ( KiB/s): min=12288, max=14621, per=23.31%, avg=13642.33, stdev=1005.76, samples=9 00:28:34.119 iops : min= 1536, max= 1827, avg=1705.22, stdev=125.64, samples=9 00:28:34.119 lat (msec) : 2=0.23%, 4=2.71%, 10=97.05% 00:28:34.119 cpu : usr=91.88%, sys=7.20%, ctx=91, majf=0, minf=1074 00:28:34.119 IO depths : 1=0.1%, 2=23.6%, 4=50.9%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:34.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.119 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:34.119 issued rwts: total=8624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:34.119 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:34.119 00:28:34.119 Run status group 0 (all jobs): 00:28:34.119 READ: bw=57.2MiB/s (59.9MB/s), 13.5MiB/s-15.1MiB/s (14.1MB/s-15.9MB/s), io=286MiB (300MB), run=5001-5003msec 00:28:34.377 ----------------------------------------------------- 00:28:34.377 Suppressions used: 00:28:34.377 count bytes template 00:28:34.377 6 52 /usr/src/fio/parse.c 00:28:34.377 1 8 libtcmalloc_minimal.so 00:28:34.377 1 904 libcrypto.so 00:28:34.377 ----------------------------------------------------- 00:28:34.377 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.377 ************************************ 00:28:34.377 END TEST fio_dif_rand_params 00:28:34.377 ************************************ 00:28:34.377 00:28:34.377 real 0m26.815s 00:28:34.377 user 2m6.859s 00:28:34.377 sys 0m9.235s 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:34.377 09:06:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:34.377 09:06:12 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:34.377 09:06:12 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:34.377 09:06:12 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:34.377 09:06:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:34.377 ************************************ 00:28:34.377 START TEST fio_dif_digest 00:28:34.377 ************************************ 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest 
-- target/dif.sh@127 -- # runtime=10 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.377 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:34.636 bdev_null0 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:34.636 [2024-09-28 09:06:12.395213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:28:34.636 { 00:28:34.636 "params": { 00:28:34.636 "name": "Nvme$subsystem", 00:28:34.636 "trtype": "$TEST_TRANSPORT", 00:28:34.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.636 "adrfam": "ipv4", 00:28:34.636 "trsvcid": "$NVMF_PORT", 00:28:34.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.636 "hdgst": ${hdgst:-false}, 00:28:34.636 "ddgst": ${ddgst:-false} 00:28:34.636 }, 00:28:34.636 "method": "bdev_nvme_attach_controller" 00:28:34.636 } 00:28:34.636 EOF 00:28:34.636 )") 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
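Editor's note: the ldd / grep libasan / awk '{print $3}' lines above, together with the LD_PRELOAD line printed just below, implement a common workaround: when the fio plugin is built with AddressSanitizer, the ASan runtime typically has to be preloaded ahead of the plugin or fio refuses to load it. A hedged sketch of that detection step follows, reusing the plugin path from this log; config.json and job.fio are placeholder names.

#!/usr/bin/env bash
# Sketch of the sanitizer-preload pattern seen in the trace. The plugin path
# is the one from this log; config.json and job.fio are placeholders.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

# Find the ASan runtime the plugin was linked against (empty if not sanitized).
asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')

# Preload the sanitizer first, then the fio plugin, and run the job.
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=config.json job.fio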
00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:28:34.636 "params": { 00:28:34.636 "name": "Nvme0", 00:28:34.636 "trtype": "tcp", 00:28:34.636 "traddr": "10.0.0.3", 00:28:34.636 "adrfam": "ipv4", 00:28:34.636 "trsvcid": "4420", 00:28:34.636 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:34.636 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:34.636 "hdgst": true, 00:28:34.636 "ddgst": true 00:28:34.636 }, 00:28:34.636 "method": "bdev_nvme_attach_controller" 00:28:34.636 }' 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # break 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:34.636 09:06:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:34.636 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:34.636 ... 00:28:34.636 fio-3.35 00:28:34.636 Starting 3 threads 00:28:46.870 00:28:46.870 filename0: (groupid=0, jobs=1): err= 0: pid=89919: Sat Sep 28 09:06:23 2024 00:28:46.870 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(261MiB/10008msec) 00:28:46.870 slat (nsec): min=5500, max=58618, avg=18218.68, stdev=6387.94 00:28:46.870 clat (usec): min=13332, max=19645, avg=14357.67, stdev=651.99 00:28:46.870 lat (usec): min=13348, max=19667, avg=14375.89, stdev=652.70 00:28:46.870 clat percentiles (usec): 00:28:46.870 | 1.00th=[13435], 5.00th=[13435], 10.00th=[13566], 20.00th=[13960], 00:28:46.870 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14222], 60.00th=[14353], 00:28:46.870 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15008], 95.00th=[15533], 00:28:46.870 | 99.00th=[16450], 99.50th=[16909], 99.90th=[19530], 99.95th=[19530], 00:28:46.870 | 99.99th=[19530] 00:28:46.870 bw ( KiB/s): min=24576, max=28416, per=33.33%, avg=26649.60, stdev=866.75, samples=20 00:28:46.870 iops : min= 192, max= 222, avg=208.20, stdev= 6.77, samples=20 00:28:46.870 lat (msec) : 20=100.00% 00:28:46.870 cpu : usr=92.22%, sys=7.20%, ctx=119, majf=0, minf=1073 00:28:46.870 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:46.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.870 issued rwts: total=2085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.870 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:46.870 filename0: (groupid=0, jobs=1): err= 0: pid=89920: Sat Sep 28 09:06:23 2024 00:28:46.870 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(261MiB/10014msec) 00:28:46.870 slat (nsec): min=6840, max=62347, avg=18018.57, stdev=6404.78 00:28:46.870 clat (usec): min=13263, max=20638, avg=14366.29, stdev=695.78 00:28:46.870 lat (usec): min=13272, max=20666, avg=14384.31, stdev=696.42 00:28:46.870 clat percentiles (usec): 00:28:46.870 | 1.00th=[13304], 5.00th=[13435], 10.00th=[13566], 20.00th=[13960], 00:28:46.870 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14222], 60.00th=[14353], 00:28:46.870 | 
70.00th=[14484], 80.00th=[14746], 90.00th=[15008], 95.00th=[15533], 00:28:46.870 | 99.00th=[16581], 99.50th=[18220], 99.90th=[20579], 99.95th=[20579], 00:28:46.870 | 99.99th=[20579] 00:28:46.870 bw ( KiB/s): min=25344, max=28416, per=33.33%, avg=26649.60, stdev=791.88, samples=20 00:28:46.870 iops : min= 198, max= 222, avg=208.20, stdev= 6.19, samples=20 00:28:46.870 lat (msec) : 20=99.86%, 50=0.14% 00:28:46.870 cpu : usr=92.17%, sys=7.23%, ctx=19, majf=0, minf=1076 00:28:46.870 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:46.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.870 issued rwts: total=2085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.870 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:46.870 filename0: (groupid=0, jobs=1): err= 0: pid=89921: Sat Sep 28 09:06:23 2024 00:28:46.870 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(261MiB/10010msec) 00:28:46.870 slat (nsec): min=8253, max=65683, avg=18252.14, stdev=6389.52 00:28:46.870 clat (usec): min=13324, max=19655, avg=14359.98, stdev=657.55 00:28:46.870 lat (usec): min=13340, max=19677, avg=14378.23, stdev=658.30 00:28:46.870 clat percentiles (usec): 00:28:46.870 | 1.00th=[13435], 5.00th=[13435], 10.00th=[13566], 20.00th=[13960], 00:28:46.870 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14222], 60.00th=[14353], 00:28:46.870 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15139], 95.00th=[15533], 00:28:46.870 | 99.00th=[16581], 99.50th=[16909], 99.90th=[19530], 99.95th=[19530], 00:28:46.870 | 99.99th=[19530] 00:28:46.870 bw ( KiB/s): min=25344, max=28416, per=33.33%, avg=26649.60, stdev=791.88, samples=20 00:28:46.870 iops : min= 198, max= 222, avg=208.20, stdev= 6.19, samples=20 00:28:46.870 lat (msec) : 20=100.00% 00:28:46.870 cpu : usr=92.38%, sys=7.00%, ctx=43, majf=0, minf=1075 00:28:46.870 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:46.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.870 issued rwts: total=2085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.870 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:46.870 00:28:46.870 Run status group 0 (all jobs): 00:28:46.870 READ: bw=78.1MiB/s (81.9MB/s), 26.0MiB/s-26.0MiB/s (27.3MB/s-27.3MB/s), io=782MiB (820MB), run=10008-10014msec 00:28:46.870 ----------------------------------------------------- 00:28:46.870 Suppressions used: 00:28:46.870 count bytes template 00:28:46.870 5 44 /usr/src/fio/parse.c 00:28:46.870 1 8 libtcmalloc_minimal.so 00:28:46.870 1 904 libcrypto.so 00:28:46.870 ----------------------------------------------------- 00:28:46.870 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.871 09:06:24 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:46.871 ************************************ 00:28:46.871 END TEST fio_dif_digest 00:28:46.871 ************************************ 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.871 00:28:46.871 real 0m12.120s 00:28:46.871 user 0m29.401s 00:28:46.871 sys 0m2.492s 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:46.871 09:06:24 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:46.871 09:06:24 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:46.871 09:06:24 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:46.871 09:06:24 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:28:46.871 09:06:24 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:28:46.871 09:06:24 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:46.871 09:06:24 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:28:46.871 09:06:24 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:46.871 09:06:24 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:46.871 rmmod nvme_tcp 00:28:46.871 rmmod nvme_fabrics 00:28:46.871 rmmod nvme_keyring 00:28:46.871 09:06:24 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:46.871 09:06:24 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:28:46.871 09:06:24 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:28:46.871 09:06:24 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 89167 ']' 00:28:46.871 09:06:24 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 89167 00:28:46.871 09:06:24 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 89167 ']' 00:28:46.871 09:06:24 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 89167 00:28:46.871 09:06:24 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:28:46.871 09:06:24 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:46.871 09:06:24 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89167 00:28:46.871 killing process with pid 89167 00:28:46.871 09:06:24 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:46.871 09:06:24 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:46.871 09:06:24 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89167' 00:28:46.871 09:06:24 nvmf_dif -- common/autotest_common.sh@969 -- # kill 89167 00:28:46.871 09:06:24 nvmf_dif -- common/autotest_common.sh@974 -- # wait 89167 00:28:47.803 09:06:25 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:28:47.803 09:06:25 nvmf_dif -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:48.065 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:48.065 Waiting for block devices as requested 00:28:48.065 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:48.065 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:28:48.326 09:06:26 
nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:48.326 09:06:26 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:48.585 09:06:26 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:48.585 09:06:26 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:48.585 09:06:26 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.585 09:06:26 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:48.585 09:06:26 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.585 09:06:26 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:28:48.585 00:28:48.585 real 1m7.726s 00:28:48.585 user 4m2.867s 00:28:48.585 sys 0m19.838s 00:28:48.585 09:06:26 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:48.585 09:06:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:48.585 ************************************ 00:28:48.585 END TEST nvmf_dif 00:28:48.585 ************************************ 00:28:48.585 09:06:26 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:48.585 09:06:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:48.585 09:06:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:48.585 09:06:26 -- common/autotest_common.sh@10 -- # set +x 00:28:48.585 ************************************ 00:28:48.585 START TEST nvmf_abort_qd_sizes 00:28:48.585 ************************************ 00:28:48.585 09:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:48.585 * Looking for test storage... 
00:28:48.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:48.585 09:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:48.585 09:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:28:48.585 09:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:48.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.845 --rc genhtml_branch_coverage=1 00:28:48.845 --rc genhtml_function_coverage=1 00:28:48.845 --rc genhtml_legend=1 00:28:48.845 --rc geninfo_all_blocks=1 00:28:48.845 --rc geninfo_unexecuted_blocks=1 00:28:48.845 00:28:48.845 ' 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:48.845 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.845 --rc genhtml_branch_coverage=1 00:28:48.845 --rc genhtml_function_coverage=1 00:28:48.845 --rc genhtml_legend=1 00:28:48.845 --rc geninfo_all_blocks=1 00:28:48.845 --rc geninfo_unexecuted_blocks=1 00:28:48.845 00:28:48.845 ' 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:48.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.845 --rc genhtml_branch_coverage=1 00:28:48.845 --rc genhtml_function_coverage=1 00:28:48.845 --rc genhtml_legend=1 00:28:48.845 --rc geninfo_all_blocks=1 00:28:48.845 --rc geninfo_unexecuted_blocks=1 00:28:48.845 00:28:48.845 ' 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:48.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:48.845 --rc genhtml_branch_coverage=1 00:28:48.845 --rc genhtml_function_coverage=1 00:28:48.845 --rc genhtml_legend=1 00:28:48.845 --rc geninfo_all_blocks=1 00:28:48.845 --rc geninfo_unexecuted_blocks=1 00:28:48.845 00:28:48.845 ' 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.845 09:06:26 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:48.846 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@456 -- # nvmf_veth_init 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:48.846 Cannot find device "nvmf_init_br" 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:48.846 Cannot find device "nvmf_init_br2" 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:48.846 Cannot find device "nvmf_tgt_br" 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:48.846 Cannot find device "nvmf_tgt_br2" 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:48.846 Cannot find device "nvmf_init_br" 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:28:48.846 Cannot find device "nvmf_init_br2" 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:48.846 Cannot find device "nvmf_tgt_br" 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:48.846 Cannot find device "nvmf_tgt_br2" 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:48.846 Cannot find device "nvmf_br" 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:48.846 Cannot find device "nvmf_init_if" 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:48.846 Cannot find device "nvmf_init_if2" 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:48.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:48.846 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:48.846 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:49.106 09:06:26 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:49.106 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:49.106 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:28:49.106 00:28:49.106 --- 10.0.0.3 ping statistics --- 00:28:49.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.106 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:49.106 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:49.106 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:28:49.106 00:28:49.106 --- 10.0.0.4 ping statistics --- 00:28:49.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.106 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:49.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:49.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:28:49.106 00:28:49.106 --- 10.0.0.1 ping statistics --- 00:28:49.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.106 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:49.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:49.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:28:49.106 00:28:49.106 --- 10.0.0.2 ping statistics --- 00:28:49.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:49.106 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # return 0 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:28:49.106 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:50.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:50.041 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:50.041 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:50.041 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.041 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:28:50.041 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=90583 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 90583 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 90583 ']' 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:50.042 09:06:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:50.299 [2024-09-28 09:06:28.070790] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:28:50.299 [2024-09-28 09:06:28.070975] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.299 [2024-09-28 09:06:28.249604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.556 [2024-09-28 09:06:28.490639] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.556 [2024-09-28 09:06:28.490709] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.556 [2024-09-28 09:06:28.490734] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.556 [2024-09-28 09:06:28.490749] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.556 [2024-09-28 09:06:28.490765] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.556 [2024-09-28 09:06:28.490980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.556 [2024-09-28 09:06:28.491689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.556 [2024-09-28 09:06:28.491820] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.556 [2024-09-28 09:06:28.491976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.814 [2024-09-28 09:06:28.681569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:51.072 09:06:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:51.072 09:06:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:28:51.072 09:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:28:51.072 09:06:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:51.072 09:06:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:28:51.331 09:06:29 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
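The enumeration traced above is how nvme_in_userspace decides which controllers the abort test may claim: it asks lspci for every PCI function whose class/subclass is 0108 (mass storage / NVM, programming interface 02) and keeps the BDFs that also pass the nvme-driver sysfs check. A standalone sketch of the same idea; the helper name and the simplified class filter are illustrative, not part of the test scripts.

    # List NVMe PCI functions the way the trace above does: match class 0108 in
    # lspci's machine-readable output, then apply the same sysfs existence check.
    # (The real helper additionally filters on programming interface 02.)
    list_nvme_bdfs() {
        local bdf class _
        lspci -mm -n -D | while read -r bdf class _; do
            class=${class//\"/}                      # lspci -mm quotes each field
            [[ $class == 0108 ]] || continue         # 01 = mass storage, 08 = NVM
            [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
        done
    }
    list_nvme_bdfs    # the trace above ends up with 0000:00:10.0 and 0000:00:11.0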
00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:51.331 09:06:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:51.331 ************************************ 00:28:51.331 START TEST spdk_target_abort 00:28:51.331 ************************************ 00:28:51.331 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:28:51.331 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:51.331 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:28:51.331 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.331 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:51.331 spdk_targetn1 00:28:51.331 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.331 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:51.332 [2024-09-28 09:06:29.212371] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:51.332 [2024-09-28 09:06:29.246245] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:51.332 09:06:29 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:51.332 09:06:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:54.612 Initializing NVMe Controllers 00:28:54.612 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:54.612 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:54.612 Initialization complete. Launching workers. 
00:28:54.612 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8695, failed: 0 00:28:54.612 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1023, failed to submit 7672 00:28:54.612 success 834, unsuccessful 189, failed 0 00:28:54.612 09:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:54.612 09:06:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:58.790 Initializing NVMe Controllers 00:28:58.790 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:58.790 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:58.790 Initialization complete. Launching workers. 00:28:58.790 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9016, failed: 0 00:28:58.790 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1176, failed to submit 7840 00:28:58.790 success 430, unsuccessful 746, failed 0 00:28:58.790 09:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:58.790 09:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:02.071 Initializing NVMe Controllers 00:29:02.071 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:29:02.071 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:02.071 Initialization complete. Launching workers. 
00:29:02.071 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 28226, failed: 0 00:29:02.071 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2177, failed to submit 26049 00:29:02.071 success 351, unsuccessful 1826, failed 0 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 90583 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 90583 ']' 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 90583 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90583 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90583' 00:29:02.072 killing process with pid 90583 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 90583 00:29:02.072 09:06:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 90583 00:29:02.638 00:29:02.638 real 0m11.379s 00:29:02.638 user 0m44.897s 00:29:02.638 sys 0m2.338s 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:02.638 ************************************ 00:29:02.638 END TEST spdk_target_abort 00:29:02.638 ************************************ 00:29:02.638 09:06:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:02.638 09:06:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:02.638 09:06:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:02.638 09:06:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:02.638 ************************************ 00:29:02.638 START TEST kernel_target_abort 00:29:02.638 
************************************ 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:29:02.638 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:02.639 09:06:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:03.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:03.207 Waiting for block devices as requested 00:29:03.207 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:03.207 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:03.779 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:29:03.780 No valid GPT data, bailing 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:29:03.780 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:29:04.039 No valid GPT data, bailing 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:29:04.039 No valid GPT data, bailing 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:29:04.039 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:29:04.040 No valid GPT data, bailing 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ 
-b /dev/nvme1n1 ]] 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:04.040 09:06:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 --hostid=b09210cb-7022-43fe-9129-03e098f7a403 -a 10.0.0.1 -t tcp -s 4420 00:29:04.040 00:29:04.040 Discovery Log Number of Records 2, Generation counter 2 00:29:04.040 =====Discovery Log Entry 0====== 00:29:04.040 trtype: tcp 00:29:04.040 adrfam: ipv4 00:29:04.040 subtype: current discovery subsystem 00:29:04.040 treq: not specified, sq flow control disable supported 00:29:04.040 portid: 1 00:29:04.040 trsvcid: 4420 00:29:04.040 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:04.040 traddr: 10.0.0.1 00:29:04.040 eflags: none 00:29:04.040 sectype: none 00:29:04.040 =====Discovery Log Entry 1====== 00:29:04.040 trtype: tcp 00:29:04.040 adrfam: ipv4 00:29:04.040 subtype: nvme subsystem 00:29:04.040 treq: not specified, sq flow control disable supported 00:29:04.040 portid: 1 00:29:04.040 trsvcid: 4420 00:29:04.040 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:04.040 traddr: 10.0.0.1 00:29:04.040 eflags: none 00:29:04.040 sectype: none 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:04.040 09:06:42 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:04.040 09:06:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:07.329 Initializing NVMe Controllers 00:29:07.329 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:07.329 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:07.329 Initialization complete. Launching workers. 00:29:07.329 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 24711, failed: 0 00:29:07.329 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24711, failed to submit 0 00:29:07.329 success 0, unsuccessful 24711, failed 0 00:29:07.329 09:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:07.329 09:06:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:10.617 Initializing NVMe Controllers 00:29:10.617 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:10.617 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:10.617 Initialization complete. Launching workers. 
00:29:10.617 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 54328, failed: 0 00:29:10.617 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21597, failed to submit 32731 00:29:10.617 success 0, unsuccessful 21597, failed 0 00:29:10.617 09:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:10.617 09:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:13.906 Initializing NVMe Controllers 00:29:13.906 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:13.906 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:13.906 Initialization complete. Launching workers. 00:29:13.906 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59306, failed: 0 00:29:13.906 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14810, failed to submit 44496 00:29:13.906 success 0, unsuccessful 14810, failed 0 00:29:13.906 09:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:29:13.906 09:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:29:13.906 09:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:29:13.906 09:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:13.906 09:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:13.906 09:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:13.906 09:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:13.906 09:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:29:13.906 09:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:29:13.906 09:06:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:14.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:15.145 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:29:15.428 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:29:15.428 00:29:15.428 real 0m12.660s 00:29:15.428 user 0m6.316s 00:29:15.428 sys 0m3.999s 00:29:15.428 09:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:15.428 09:06:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:15.428 ************************************ 00:29:15.428 END TEST kernel_target_abort 00:29:15.428 ************************************ 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:29:15.428 
09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.428 rmmod nvme_tcp 00:29:15.428 rmmod nvme_fabrics 00:29:15.428 rmmod nvme_keyring 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 90583 ']' 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 90583 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 90583 ']' 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 90583 00:29:15.428 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90583) - No such process 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 90583 is not found' 00:29:15.428 Process with pid 90583 is not found 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:29:15.428 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:16.006 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:16.006 Waiting for block devices as requested 00:29:16.006 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:16.006 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:16.006 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:29:16.006 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:29:16.006 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:29:16.006 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:29:16.006 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:29:16.006 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:29:16.006 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:16.006 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:16.006 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:16.006 09:06:53 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:16.266 09:06:54 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:29:16.266 00:29:16.266 real 0m27.776s 00:29:16.266 user 0m52.652s 00:29:16.266 sys 0m7.781s 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:16.266 09:06:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:16.266 ************************************ 00:29:16.266 END TEST nvmf_abort_qd_sizes 00:29:16.266 ************************************ 00:29:16.266 09:06:54 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:29:16.266 09:06:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:16.266 09:06:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:16.266 09:06:54 -- common/autotest_common.sh@10 -- # set +x 00:29:16.266 ************************************ 00:29:16.266 START TEST keyring_file 00:29:16.266 ************************************ 00:29:16.266 09:06:54 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:29:16.527 * Looking for test storage... 
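The kernel-mode target exercised by kernel_target_abort above is assembled entirely through nvmet configfs and torn down the same way (echo 0, rm, rmdir, modprobe -r), with no SPDK process involved. A condensed sketch of that sequence, reusing the NQN, address and port from the trace; the backing block device is whichever namespace the GPT scan settled on (here /dev/nvme1n1), and the explicit nvmet-tcp modprobe is added for completeness.

    # Build an NVMe/TCP soft target at 10.0.0.1:4420 backed by /dev/nvme1n1.
    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet
    modprobe nvmet nvmet-tcp
    mkdir "$cfg/subsystems/$nqn"
    echo 1 > "$cfg/subsystems/$nqn/attr_allow_any_host"
    mkdir "$cfg/subsystems/$nqn/namespaces/1"
    echo /dev/nvme1n1 > "$cfg/subsystems/$nqn/namespaces/1/device_path"
    echo 1 > "$cfg/subsystems/$nqn/namespaces/1/enable"
    mkdir "$cfg/ports/1"
    echo 10.0.0.1 > "$cfg/ports/1/addr_traddr"
    echo tcp      > "$cfg/ports/1/addr_trtype"
    echo 4420     > "$cfg/ports/1/addr_trsvcid"
    echo ipv4     > "$cfg/ports/1/addr_adrfam"
    ln -s "$cfg/subsystems/$nqn" "$cfg/ports/1/subsystems/$nqn"
    # Teardown, mirroring clean_kernel_target in the trace.
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"
    rm -f "$cfg/ports/1/subsystems/$nqn"
    rmdir "$cfg/subsystems/$nqn/namespaces/1" "$cfg/ports/1" "$cfg/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet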
00:29:16.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:16.527 09:06:54 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:16.527 09:06:54 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:29:16.527 09:06:54 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:16.527 09:06:54 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@345 -- # : 1 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@353 -- # local d=1 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@355 -- # echo 1 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@353 -- # local d=2 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@355 -- # echo 2 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@368 -- # return 0 00:29:16.527 09:06:54 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.527 09:06:54 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:16.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.527 --rc genhtml_branch_coverage=1 00:29:16.527 --rc genhtml_function_coverage=1 00:29:16.527 --rc genhtml_legend=1 00:29:16.527 --rc geninfo_all_blocks=1 00:29:16.527 --rc geninfo_unexecuted_blocks=1 00:29:16.527 00:29:16.527 ' 00:29:16.527 09:06:54 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:16.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.527 --rc genhtml_branch_coverage=1 00:29:16.527 --rc genhtml_function_coverage=1 00:29:16.527 --rc genhtml_legend=1 00:29:16.527 --rc geninfo_all_blocks=1 00:29:16.527 --rc 
geninfo_unexecuted_blocks=1 00:29:16.527 00:29:16.527 ' 00:29:16.527 09:06:54 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:16.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.527 --rc genhtml_branch_coverage=1 00:29:16.527 --rc genhtml_function_coverage=1 00:29:16.527 --rc genhtml_legend=1 00:29:16.527 --rc geninfo_all_blocks=1 00:29:16.527 --rc geninfo_unexecuted_blocks=1 00:29:16.527 00:29:16.527 ' 00:29:16.527 09:06:54 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:16.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.527 --rc genhtml_branch_coverage=1 00:29:16.527 --rc genhtml_function_coverage=1 00:29:16.527 --rc genhtml_legend=1 00:29:16.527 --rc geninfo_all_blocks=1 00:29:16.527 --rc geninfo_unexecuted_blocks=1 00:29:16.527 00:29:16.527 ' 00:29:16.527 09:06:54 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:16.527 09:06:54 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.527 09:06:54 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.527 09:06:54 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.527 09:06:54 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.527 09:06:54 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.527 09:06:54 keyring_file -- paths/export.sh@5 -- # export PATH 00:29:16.527 09:06:54 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@51 -- # : 0 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:16.527 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.527 09:06:54 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.527 09:06:54 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:16.527 09:06:54 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:16.528 09:06:54 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:16.528 09:06:54 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:16.528 09:06:54 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:16.528 09:06:54 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:16.528 09:06:54 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:16.528 09:06:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:16.528 09:06:54 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:16.528 09:06:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:16.528 09:06:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:16.528 09:06:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:16.528 09:06:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.WowwppkCJR 00:29:16.528 09:06:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:16.528 09:06:54 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:16.528 09:06:54 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:29:16.528 09:06:54 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:29:16.528 09:06:54 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:29:16.528 09:06:54 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:29:16.528 09:06:54 keyring_file -- nvmf/common.sh@729 -- # python - 00:29:16.787 09:06:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WowwppkCJR 00:29:16.787 09:06:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.WowwppkCJR 00:29:16.787 09:06:54 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.WowwppkCJR 00:29:16.787 09:06:54 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:16.787 09:06:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:16.787 09:06:54 keyring_file -- keyring/common.sh@17 -- # name=key1 00:29:16.787 09:06:54 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:16.787 09:06:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:16.787 09:06:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:16.787 09:06:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.BBwGQvg8uk 00:29:16.787 09:06:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:16.787 09:06:54 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:16.787 09:06:54 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:29:16.787 09:06:54 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:29:16.787 09:06:54 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:29:16.787 09:06:54 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:29:16.787 09:06:54 keyring_file -- nvmf/common.sh@729 -- # python - 00:29:16.787 09:06:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.BBwGQvg8uk 00:29:16.787 09:06:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.BBwGQvg8uk 00:29:16.787 09:06:54 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.BBwGQvg8uk 00:29:16.787 09:06:54 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.788 09:06:54 keyring_file -- keyring/file.sh@30 -- # tgtpid=91711 00:29:16.788 09:06:54 keyring_file -- keyring/file.sh@32 -- # waitforlisten 91711 00:29:16.788 09:06:54 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 91711 ']' 00:29:16.788 09:06:54 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.788 09:06:54 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:16.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
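prep_key, traced above for key0 and key1, boils down to three steps: create a private temp file, fill it with the key rewritten in the NVMeTLSkey-1 interchange form, and restrict its mode. A condensed sketch, assuming the helpers from keyring/common.sh and nvmf/common.sh are sourced the way file.sh sources them; the exact payload is whatever the inline python helper behind format_interchange_psk emits.

    # Produce a TLS PSK file for key0 exactly as the traced prep_key call does.
    key0=00112233445566778899aabbccddeeff
    key0path=$(mktemp)
    format_interchange_psk "$key0" 0 > "$key0path"   # digest argument 0, as in the trace
    chmod 0600 "$key0path"                           # keep the PSK private to the test user
    echo "$key0path"                                 # e.g. /tmp/tmp.WowwppkCJR above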
00:29:16.788 09:06:54 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.788 09:06:54 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:16.788 09:06:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:16.788 [2024-09-28 09:06:54.727494] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:29:16.788 [2024-09-28 09:06:54.728344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91711 ] 00:29:17.047 [2024-09-28 09:06:54.903439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.307 [2024-09-28 09:06:55.129174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.566 [2024-09-28 09:06:55.335003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:29:17.891 09:06:55 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:17.891 [2024-09-28 09:06:55.768422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.891 null0 00:29:17.891 [2024-09-28 09:06:55.800394] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:17.891 [2024-09-28 09:06:55.800692] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.891 09:06:55 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:17.891 [2024-09-28 09:06:55.832384] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:29:17.891 request: 00:29:17.891 { 00:29:17.891 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:17.891 "secure_channel": false, 00:29:17.891 "listen_address": { 00:29:17.891 "trtype": "tcp", 00:29:17.891 "traddr": "127.0.0.1", 00:29:17.891 "trsvcid": "4420" 00:29:17.891 }, 00:29:17.891 "method": "nvmf_subsystem_add_listener", 
00:29:17.891 "req_id": 1 00:29:17.891 } 00:29:17.891 Got JSON-RPC error response 00:29:17.891 response: 00:29:17.891 { 00:29:17.891 "code": -32602, 00:29:17.891 "message": "Invalid parameters" 00:29:17.891 } 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:17.891 09:06:55 keyring_file -- keyring/file.sh@47 -- # bperfpid=91728 00:29:17.891 09:06:55 keyring_file -- keyring/file.sh@49 -- # waitforlisten 91728 /var/tmp/bperf.sock 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 91728 ']' 00:29:17.891 09:06:55 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:17.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:17.891 09:06:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:18.150 [2024-09-28 09:06:55.953121] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:29:18.150 [2024-09-28 09:06:55.953312] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91728 ] 00:29:18.150 [2024-09-28 09:06:56.123949] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.409 [2024-09-28 09:06:56.270596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.668 [2024-09-28 09:06:56.421886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:18.927 09:06:56 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:18.927 09:06:56 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:29:18.927 09:06:56 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WowwppkCJR 00:29:18.927 09:06:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WowwppkCJR 00:29:19.186 09:06:57 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BBwGQvg8uk 00:29:19.186 09:06:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BBwGQvg8uk 00:29:19.445 09:06:57 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:29:19.445 09:06:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:29:19.445 09:06:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:19.445 09:06:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:19.446 09:06:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:19.704 09:06:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.WowwppkCJR == \/\t\m\p\/\t\m\p\.\W\o\w\w\p\p\k\C\J\R ]] 00:29:19.704 09:06:57 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:29:19.704 09:06:57 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:29:19.704 09:06:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:19.704 09:06:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:19.704 09:06:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:19.963 09:06:57 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.BBwGQvg8uk == \/\t\m\p\/\t\m\p\.\B\B\w\G\Q\v\g\8\u\k ]] 00:29:19.963 09:06:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:29:19.963 09:06:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:19.963 09:06:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:19.963 09:06:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:19.963 09:06:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:19.963 09:06:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:20.223 09:06:58 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:20.482 09:06:58 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:29:20.482 09:06:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:20.482 09:06:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:20.482 09:06:58 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:20.482 09:06:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:20.482 09:06:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:20.742 09:06:58 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:29:20.742 09:06:58 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:20.742 09:06:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:20.742 [2024-09-28 09:06:58.693035] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:21.001 nvme0n1 00:29:21.001 09:06:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:29:21.001 09:06:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:21.001 09:06:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:21.001 09:06:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:21.001 09:06:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:21.001 09:06:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:21.260 09:06:59 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:29:21.260 09:06:59 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:29:21.260 09:06:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:21.260 09:06:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:21.260 09:06:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:21.260 09:06:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:21.260 09:06:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:21.520 09:06:59 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:29:21.520 09:06:59 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:21.520 Running I/O for 1 seconds... 
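The one-second run announced above ("Running I/O for 1 seconds...") was set up with two calls against the bperf socket: a TCP attach of nqn.2016-06.io.spdk:cnode0 that references key0 from the keyring, then bdevperf.py's perform_tests; the per-job IOPS and latency figures follow below. Reproduced from the trace, with no options added:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

Attaching through the keyring is also what the surrounding refcnt checks verify: key0 climbs to 2 while nvme0 holds a reference and drops back to 1 after bdev_nvme_detach_controller.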
00:29:22.456 9524.00 IOPS, 37.20 MiB/s 00:29:22.456 Latency(us) 00:29:22.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.456 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:22.456 nvme0n1 : 1.01 9567.45 37.37 0.00 0.00 13322.49 6911.07 25380.31 00:29:22.456 =================================================================================================================== 00:29:22.456 Total : 9567.45 37.37 0.00 0.00 13322.49 6911.07 25380.31 00:29:22.456 { 00:29:22.456 "results": [ 00:29:22.456 { 00:29:22.456 "job": "nvme0n1", 00:29:22.456 "core_mask": "0x2", 00:29:22.456 "workload": "randrw", 00:29:22.456 "percentage": 50, 00:29:22.456 "status": "finished", 00:29:22.456 "queue_depth": 128, 00:29:22.456 "io_size": 4096, 00:29:22.456 "runtime": 1.008942, 00:29:22.456 "iops": 9567.447881047672, 00:29:22.456 "mibps": 37.37284328534247, 00:29:22.456 "io_failed": 0, 00:29:22.456 "io_timeout": 0, 00:29:22.456 "avg_latency_us": 13322.48659013213, 00:29:22.456 "min_latency_us": 6911.069090909091, 00:29:22.456 "max_latency_us": 25380.305454545454 00:29:22.456 } 00:29:22.456 ], 00:29:22.456 "core_count": 1 00:29:22.456 } 00:29:22.715 09:07:00 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:22.715 09:07:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:22.974 09:07:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:29:22.974 09:07:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:22.974 09:07:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:22.974 09:07:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:22.974 09:07:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:22.974 09:07:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:23.233 09:07:01 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:23.233 09:07:01 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:29:23.233 09:07:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:23.233 09:07:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:23.233 09:07:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:23.233 09:07:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:23.233 09:07:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:23.492 09:07:01 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:29:23.492 09:07:01 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:23.492 09:07:01 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:23.492 09:07:01 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:23.492 09:07:01 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:23.492 09:07:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:23.492 09:07:01 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:23.492 09:07:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:23.492 09:07:01 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:23.492 09:07:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:23.751 [2024-09-28 09:07:01.567631] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:23.751 [2024-09-28 09:07:01.568586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:29:23.751 [2024-09-28 09:07:01.569558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:29:23.751 [2024-09-28 09:07:01.570551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:23.751 [2024-09-28 09:07:01.570595] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:23.751 [2024-09-28 09:07:01.570626] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:23.751 [2024-09-28 09:07:01.570645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
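The transport and controller errors above are the intended outcome: this attach names key1 rather than the key the target side was provisioned with, so the TLS handshake presumably cannot complete, and the JSON-RPC request plus its -5 (Input/output error) response are dumped just below. The call is wrapped in the harness's NOT helper, which inverts the exit status of the command under test; a simplified sketch of the idea (the real logic in autotest_common.sh, visible in the es/valid_exec_arg trace, also appears to treat exit codes above 128 as crashes rather than clean failures):

  NOT() {
      if "$@"; then
          return 1    # command unexpectedly succeeded -> test failure
      fi
      return 0        # command failed, which is what the caller wanted
  }
  # e.g. the mismatched-PSK attach traced above:
  #   NOT bperf_cmd bdev_nvme_attach_controller ... --psk key1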
00:29:23.751 request: 00:29:23.751 { 00:29:23.751 "name": "nvme0", 00:29:23.751 "trtype": "tcp", 00:29:23.751 "traddr": "127.0.0.1", 00:29:23.751 "adrfam": "ipv4", 00:29:23.751 "trsvcid": "4420", 00:29:23.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:23.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:23.751 "prchk_reftag": false, 00:29:23.751 "prchk_guard": false, 00:29:23.751 "hdgst": false, 00:29:23.751 "ddgst": false, 00:29:23.751 "psk": "key1", 00:29:23.751 "allow_unrecognized_csi": false, 00:29:23.751 "method": "bdev_nvme_attach_controller", 00:29:23.751 "req_id": 1 00:29:23.751 } 00:29:23.751 Got JSON-RPC error response 00:29:23.751 response: 00:29:23.751 { 00:29:23.751 "code": -5, 00:29:23.751 "message": "Input/output error" 00:29:23.751 } 00:29:23.751 09:07:01 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:23.751 09:07:01 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:23.751 09:07:01 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:23.751 09:07:01 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:23.751 09:07:01 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:29:23.751 09:07:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:23.751 09:07:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:23.751 09:07:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:23.751 09:07:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:23.751 09:07:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:24.011 09:07:01 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:24.011 09:07:01 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:29:24.011 09:07:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:24.011 09:07:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:24.011 09:07:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:24.011 09:07:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:24.011 09:07:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:24.269 09:07:02 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:29:24.269 09:07:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:29:24.269 09:07:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:24.529 09:07:02 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:29:24.529 09:07:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:24.788 09:07:02 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:29:24.788 09:07:02 keyring_file -- keyring/file.sh@78 -- # jq length 00:29:24.788 09:07:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:25.046 09:07:02 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:29:25.046 09:07:02 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.WowwppkCJR 00:29:25.046 09:07:02 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.WowwppkCJR 00:29:25.046 09:07:02 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:29:25.046 09:07:02 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.WowwppkCJR 00:29:25.046 09:07:02 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:25.046 09:07:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.046 09:07:02 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:25.046 09:07:02 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.046 09:07:02 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WowwppkCJR 00:29:25.046 09:07:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WowwppkCJR 00:29:25.046 [2024-09-28 09:07:03.001460] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.WowwppkCJR': 0100660 00:29:25.046 [2024-09-28 09:07:03.001518] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:25.046 request: 00:29:25.046 { 00:29:25.046 "name": "key0", 00:29:25.046 "path": "/tmp/tmp.WowwppkCJR", 00:29:25.046 "method": "keyring_file_add_key", 00:29:25.046 "req_id": 1 00:29:25.046 } 00:29:25.046 Got JSON-RPC error response 00:29:25.046 response: 00:29:25.046 { 00:29:25.046 "code": -1, 00:29:25.046 "message": "Operation not permitted" 00:29:25.046 } 00:29:25.046 09:07:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:25.046 09:07:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:25.046 09:07:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:25.046 09:07:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:25.046 09:07:03 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.WowwppkCJR 00:29:25.046 09:07:03 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WowwppkCJR 00:29:25.046 09:07:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WowwppkCJR 00:29:25.304 09:07:03 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.WowwppkCJR 00:29:25.304 09:07:03 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:29:25.304 09:07:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:25.305 09:07:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:25.305 09:07:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:25.305 09:07:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:25.305 09:07:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:25.564 09:07:03 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:29:25.564 09:07:03 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:25.564 09:07:03 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:29:25.564 09:07:03 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:25.564 09:07:03 
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:29:25.564 09:07:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.564 09:07:03 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:25.564 09:07:03 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:25.564 09:07:03 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:25.564 09:07:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:25.824 [2024-09-28 09:07:03.752257] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.WowwppkCJR': No such file or directory 00:29:25.824 [2024-09-28 09:07:03.752312] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:25.824 [2024-09-28 09:07:03.752338] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:25.824 [2024-09-28 09:07:03.752366] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:29:25.824 [2024-09-28 09:07:03.752385] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:25.824 [2024-09-28 09:07:03.752397] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:25.824 request: 00:29:25.824 { 00:29:25.824 "name": "nvme0", 00:29:25.824 "trtype": "tcp", 00:29:25.824 "traddr": "127.0.0.1", 00:29:25.824 "adrfam": "ipv4", 00:29:25.824 "trsvcid": "4420", 00:29:25.824 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:25.824 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:25.824 "prchk_reftag": false, 00:29:25.824 "prchk_guard": false, 00:29:25.824 "hdgst": false, 00:29:25.824 "ddgst": false, 00:29:25.824 "psk": "key0", 00:29:25.824 "allow_unrecognized_csi": false, 00:29:25.824 "method": "bdev_nvme_attach_controller", 00:29:25.824 "req_id": 1 00:29:25.824 } 00:29:25.824 Got JSON-RPC error response 00:29:25.824 response: 00:29:25.824 { 00:29:25.824 "code": -19, 00:29:25.824 "message": "No such device" 00:29:25.824 } 00:29:25.824 09:07:03 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:29:25.824 09:07:03 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:25.824 09:07:03 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:25.824 09:07:03 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:25.824 09:07:03 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:29:25.824 09:07:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:26.085 09:07:04 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:26.085 09:07:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:29:26.085 09:07:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:29:26.085 09:07:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:26.085 
09:07:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:29:26.085 09:07:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:29:26.085 09:07:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.RKReepZHjX 00:29:26.085 09:07:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:26.085 09:07:04 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:26.085 09:07:04 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:29:26.085 09:07:04 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:29:26.085 09:07:04 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:29:26.085 09:07:04 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:29:26.085 09:07:04 keyring_file -- nvmf/common.sh@729 -- # python - 00:29:26.085 09:07:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RKReepZHjX 00:29:26.085 09:07:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.RKReepZHjX 00:29:26.085 09:07:04 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.RKReepZHjX 00:29:26.085 09:07:04 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RKReepZHjX 00:29:26.085 09:07:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RKReepZHjX 00:29:26.345 09:07:04 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:26.345 09:07:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:26.604 nvme0n1 00:29:26.604 09:07:04 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:29:26.604 09:07:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:26.604 09:07:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:26.604 09:07:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:26.604 09:07:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:26.604 09:07:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:27.171 09:07:04 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:29:27.171 09:07:04 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:29:27.171 09:07:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:27.430 09:07:05 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:29:27.430 09:07:05 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:29:27.430 09:07:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:27.430 09:07:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:27.430 09:07:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:27.689 09:07:05 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:29:27.689 09:07:05 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:29:27.689 09:07:05 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:29:27.689 09:07:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:27.689 09:07:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:27.689 09:07:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:27.689 09:07:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:27.689 09:07:05 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:29:27.689 09:07:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:27.689 09:07:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:27.947 09:07:05 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:29:27.947 09:07:05 keyring_file -- keyring/file.sh@105 -- # jq length 00:29:27.947 09:07:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:28.206 09:07:06 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:29:28.206 09:07:06 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.RKReepZHjX 00:29:28.206 09:07:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.RKReepZHjX 00:29:28.464 09:07:06 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.BBwGQvg8uk 00:29:28.464 09:07:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.BBwGQvg8uk 00:29:28.722 09:07:06 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:28.722 09:07:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:28.981 nvme0n1 00:29:29.240 09:07:06 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:29:29.240 09:07:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:29.499 09:07:07 keyring_file -- keyring/file.sh@113 -- # config='{ 00:29:29.499 "subsystems": [ 00:29:29.499 { 00:29:29.499 "subsystem": "keyring", 00:29:29.499 "config": [ 00:29:29.499 { 00:29:29.499 "method": "keyring_file_add_key", 00:29:29.499 "params": { 00:29:29.499 "name": "key0", 00:29:29.499 "path": "/tmp/tmp.RKReepZHjX" 00:29:29.499 } 00:29:29.499 }, 00:29:29.499 { 00:29:29.499 "method": "keyring_file_add_key", 00:29:29.499 "params": { 00:29:29.499 "name": "key1", 00:29:29.499 "path": "/tmp/tmp.BBwGQvg8uk" 00:29:29.499 } 00:29:29.499 } 00:29:29.499 ] 00:29:29.499 }, 00:29:29.499 { 00:29:29.499 "subsystem": "iobuf", 00:29:29.499 "config": [ 00:29:29.499 { 00:29:29.499 "method": "iobuf_set_options", 00:29:29.499 "params": { 00:29:29.499 "small_pool_count": 8192, 00:29:29.499 "large_pool_count": 1024, 00:29:29.499 "small_bufsize": 8192, 00:29:29.499 "large_bufsize": 135168 00:29:29.499 } 00:29:29.499 } 00:29:29.499 ] 00:29:29.499 }, 00:29:29.499 { 00:29:29.499 "subsystem": "sock", 00:29:29.499 "config": [ 
00:29:29.499 { 00:29:29.499 "method": "sock_set_default_impl", 00:29:29.499 "params": { 00:29:29.499 "impl_name": "uring" 00:29:29.499 } 00:29:29.499 }, 00:29:29.499 { 00:29:29.499 "method": "sock_impl_set_options", 00:29:29.499 "params": { 00:29:29.499 "impl_name": "ssl", 00:29:29.499 "recv_buf_size": 4096, 00:29:29.499 "send_buf_size": 4096, 00:29:29.499 "enable_recv_pipe": true, 00:29:29.499 "enable_quickack": false, 00:29:29.499 "enable_placement_id": 0, 00:29:29.499 "enable_zerocopy_send_server": true, 00:29:29.499 "enable_zerocopy_send_client": false, 00:29:29.499 "zerocopy_threshold": 0, 00:29:29.499 "tls_version": 0, 00:29:29.499 "enable_ktls": false 00:29:29.499 } 00:29:29.499 }, 00:29:29.499 { 00:29:29.499 "method": "sock_impl_set_options", 00:29:29.499 "params": { 00:29:29.499 "impl_name": "posix", 00:29:29.499 "recv_buf_size": 2097152, 00:29:29.499 "send_buf_size": 2097152, 00:29:29.499 "enable_recv_pipe": true, 00:29:29.499 "enable_quickack": false, 00:29:29.499 "enable_placement_id": 0, 00:29:29.499 "enable_zerocopy_send_server": true, 00:29:29.499 "enable_zerocopy_send_client": false, 00:29:29.499 "zerocopy_threshold": 0, 00:29:29.499 "tls_version": 0, 00:29:29.499 "enable_ktls": false 00:29:29.499 } 00:29:29.499 }, 00:29:29.499 { 00:29:29.499 "method": "sock_impl_set_options", 00:29:29.499 "params": { 00:29:29.499 "impl_name": "uring", 00:29:29.499 "recv_buf_size": 2097152, 00:29:29.499 "send_buf_size": 2097152, 00:29:29.499 "enable_recv_pipe": true, 00:29:29.499 "enable_quickack": false, 00:29:29.499 "enable_placement_id": 0, 00:29:29.499 "enable_zerocopy_send_server": false, 00:29:29.499 "enable_zerocopy_send_client": false, 00:29:29.499 "zerocopy_threshold": 0, 00:29:29.499 "tls_version": 0, 00:29:29.499 "enable_ktls": false 00:29:29.499 } 00:29:29.499 } 00:29:29.499 ] 00:29:29.499 }, 00:29:29.499 { 00:29:29.499 "subsystem": "vmd", 00:29:29.499 "config": [] 00:29:29.499 }, 00:29:29.499 { 00:29:29.499 "subsystem": "accel", 00:29:29.499 "config": [ 00:29:29.499 { 00:29:29.499 "method": "accel_set_options", 00:29:29.499 "params": { 00:29:29.499 "small_cache_size": 128, 00:29:29.499 "large_cache_size": 16, 00:29:29.499 "task_count": 2048, 00:29:29.499 "sequence_count": 2048, 00:29:29.499 "buf_count": 2048 00:29:29.499 } 00:29:29.499 } 00:29:29.499 ] 00:29:29.499 }, 00:29:29.499 { 00:29:29.499 "subsystem": "bdev", 00:29:29.499 "config": [ 00:29:29.499 { 00:29:29.499 "method": "bdev_set_options", 00:29:29.499 "params": { 00:29:29.499 "bdev_io_pool_size": 65535, 00:29:29.499 "bdev_io_cache_size": 256, 00:29:29.499 "bdev_auto_examine": true, 00:29:29.499 "iobuf_small_cache_size": 128, 00:29:29.499 "iobuf_large_cache_size": 16 00:29:29.499 } 00:29:29.499 }, 00:29:29.499 { 00:29:29.499 "method": "bdev_raid_set_options", 00:29:29.499 "params": { 00:29:29.499 "process_window_size_kb": 1024, 00:29:29.499 "process_max_bandwidth_mb_sec": 0 00:29:29.499 } 00:29:29.499 }, 00:29:29.499 { 00:29:29.499 "method": "bdev_iscsi_set_options", 00:29:29.499 "params": { 00:29:29.499 "timeout_sec": 30 00:29:29.499 } 00:29:29.499 }, 00:29:29.499 { 00:29:29.499 "method": "bdev_nvme_set_options", 00:29:29.499 "params": { 00:29:29.499 "action_on_timeout": "none", 00:29:29.499 "timeout_us": 0, 00:29:29.499 "timeout_admin_us": 0, 00:29:29.499 "keep_alive_timeout_ms": 10000, 00:29:29.499 "arbitration_burst": 0, 00:29:29.499 "low_priority_weight": 0, 00:29:29.499 "medium_priority_weight": 0, 00:29:29.499 "high_priority_weight": 0, 00:29:29.499 "nvme_adminq_poll_period_us": 10000, 00:29:29.499 
"nvme_ioq_poll_period_us": 0, 00:29:29.500 "io_queue_requests": 512, 00:29:29.500 "delay_cmd_submit": true, 00:29:29.500 "transport_retry_count": 4, 00:29:29.500 "bdev_retry_count": 3, 00:29:29.500 "transport_ack_timeout": 0, 00:29:29.500 "ctrlr_loss_timeout_sec": 0, 00:29:29.500 "reconnect_delay_sec": 0, 00:29:29.500 "fast_io_fail_timeout_sec": 0, 00:29:29.500 "disable_auto_failback": false, 00:29:29.500 "generate_uuids": false, 00:29:29.500 "transport_tos": 0, 00:29:29.500 "nvme_error_stat": false, 00:29:29.500 "rdma_srq_size": 0, 00:29:29.500 "io_path_stat": false, 00:29:29.500 "allow_accel_sequence": false, 00:29:29.500 "rdma_max_cq_size": 0, 00:29:29.500 "rdma_cm_event_timeout_ms": 0, 00:29:29.500 "dhchap_digests": [ 00:29:29.500 "sha256", 00:29:29.500 "sha384", 00:29:29.500 "sha512" 00:29:29.500 ], 00:29:29.500 "dhchap_dhgroups": [ 00:29:29.500 "null", 00:29:29.500 "ffdhe2048", 00:29:29.500 "ffdhe3072", 00:29:29.500 "ffdhe4096", 00:29:29.500 "ffdhe6144", 00:29:29.500 "ffdhe8192" 00:29:29.500 ] 00:29:29.500 } 00:29:29.500 }, 00:29:29.500 { 00:29:29.500 "method": "bdev_nvme_attach_controller", 00:29:29.500 "params": { 00:29:29.500 "name": "nvme0", 00:29:29.500 "trtype": "TCP", 00:29:29.500 "adrfam": "IPv4", 00:29:29.500 "traddr": "127.0.0.1", 00:29:29.500 "trsvcid": "4420", 00:29:29.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:29.500 "prchk_reftag": false, 00:29:29.500 "prchk_guard": false, 00:29:29.500 "ctrlr_loss_timeout_sec": 0, 00:29:29.500 "reconnect_delay_sec": 0, 00:29:29.500 "fast_io_fail_timeout_sec": 0, 00:29:29.500 "psk": "key0", 00:29:29.500 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:29.500 "hdgst": false, 00:29:29.500 "ddgst": false 00:29:29.500 } 00:29:29.500 }, 00:29:29.500 { 00:29:29.500 "method": "bdev_nvme_set_hotplug", 00:29:29.500 "params": { 00:29:29.500 "period_us": 100000, 00:29:29.500 "enable": false 00:29:29.500 } 00:29:29.500 }, 00:29:29.500 { 00:29:29.500 "method": "bdev_wait_for_examine" 00:29:29.500 } 00:29:29.500 ] 00:29:29.500 }, 00:29:29.500 { 00:29:29.500 "subsystem": "nbd", 00:29:29.500 "config": [] 00:29:29.500 } 00:29:29.500 ] 00:29:29.500 }' 00:29:29.500 09:07:07 keyring_file -- keyring/file.sh@115 -- # killprocess 91728 00:29:29.500 09:07:07 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 91728 ']' 00:29:29.500 09:07:07 keyring_file -- common/autotest_common.sh@954 -- # kill -0 91728 00:29:29.500 09:07:07 keyring_file -- common/autotest_common.sh@955 -- # uname 00:29:29.500 09:07:07 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:29.500 09:07:07 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91728 00:29:29.500 09:07:07 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:29.500 09:07:07 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:29.500 killing process with pid 91728 00:29:29.500 09:07:07 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91728' 00:29:29.500 09:07:07 keyring_file -- common/autotest_common.sh@969 -- # kill 91728 00:29:29.500 Received shutdown signal, test time was about 1.000000 seconds 00:29:29.500 00:29:29.500 Latency(us) 00:29:29.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.500 =================================================================================================================== 00:29:29.500 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:29.500 09:07:07 keyring_file -- common/autotest_common.sh@974 -- # 
wait 91728 00:29:30.453 09:07:08 keyring_file -- keyring/file.sh@118 -- # bperfpid=91981 00:29:30.453 09:07:08 keyring_file -- keyring/file.sh@120 -- # waitforlisten 91981 /var/tmp/bperf.sock 00:29:30.453 09:07:08 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 91981 ']' 00:29:30.453 09:07:08 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:30.453 09:07:08 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:30.453 09:07:08 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:30.453 09:07:08 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:29:30.453 "subsystems": [ 00:29:30.453 { 00:29:30.453 "subsystem": "keyring", 00:29:30.453 "config": [ 00:29:30.453 { 00:29:30.453 "method": "keyring_file_add_key", 00:29:30.453 "params": { 00:29:30.453 "name": "key0", 00:29:30.453 "path": "/tmp/tmp.RKReepZHjX" 00:29:30.453 } 00:29:30.453 }, 00:29:30.453 { 00:29:30.453 "method": "keyring_file_add_key", 00:29:30.453 "params": { 00:29:30.453 "name": "key1", 00:29:30.453 "path": "/tmp/tmp.BBwGQvg8uk" 00:29:30.453 } 00:29:30.453 } 00:29:30.453 ] 00:29:30.453 }, 00:29:30.453 { 00:29:30.453 "subsystem": "iobuf", 00:29:30.453 "config": [ 00:29:30.453 { 00:29:30.453 "method": "iobuf_set_options", 00:29:30.453 "params": { 00:29:30.453 "small_pool_count": 8192, 00:29:30.453 "large_pool_count": 1024, 00:29:30.453 "small_bufsize": 8192, 00:29:30.453 "large_bufsize": 135168 00:29:30.453 } 00:29:30.453 } 00:29:30.453 ] 00:29:30.453 }, 00:29:30.453 { 00:29:30.453 "subsystem": "sock", 00:29:30.453 "config": [ 00:29:30.453 { 00:29:30.453 "method": "sock_set_default_impl", 00:29:30.453 "params": { 00:29:30.453 "impl_name": "uring" 00:29:30.453 } 00:29:30.453 }, 00:29:30.453 { 00:29:30.453 "method": "sock_impl_set_options", 00:29:30.453 "params": { 00:29:30.453 "impl_name": "ssl", 00:29:30.453 "recv_buf_size": 4096, 00:29:30.453 "send_buf_size": 4096, 00:29:30.453 "enable_recv_pipe": true, 00:29:30.453 "enable_quickack": false, 00:29:30.453 "enable_placement_id": 0, 00:29:30.453 "enable_zerocopy_send_server": true, 00:29:30.453 "enable_zerocopy_send_client": false, 00:29:30.453 "zerocopy_threshold": 0, 00:29:30.453 "tls_version": 0, 00:29:30.453 "enable_ktls": false 00:29:30.453 } 00:29:30.453 }, 00:29:30.453 { 00:29:30.453 "method": "sock_impl_set_options", 00:29:30.453 "params": { 00:29:30.453 "impl_name": "posix", 00:29:30.453 "recv_buf_size": 2097152, 00:29:30.453 "send_buf_size": 2097152, 00:29:30.453 "enable_recv_pipe": true, 00:29:30.453 "enable_quickack": false, 00:29:30.453 "enable_placement_id": 0, 00:29:30.453 "enable_zerocopy_send_server": true, 00:29:30.453 "enable_zerocopy_send_client": false, 00:29:30.453 "zerocopy_threshold": 0, 00:29:30.453 "tls_version": 0, 00:29:30.453 "enable_ktls": false 00:29:30.453 } 00:29:30.453 }, 00:29:30.453 { 00:29:30.453 "method": "sock_impl_set_options", 00:29:30.453 "params": { 00:29:30.453 "impl_name": "uring", 00:29:30.453 "recv_buf_size": 2097152, 00:29:30.453 "send_buf_size": 2097152, 00:29:30.453 "enable_recv_pipe": true, 00:29:30.453 "enable_quickack": false, 00:29:30.453 "enable_placement_id": 0, 00:29:30.454 "enable_zerocopy_send_server": false, 00:29:30.454 "enable_zerocopy_send_client": false, 00:29:30.454 "zerocopy_threshold": 0, 00:29:30.454 "tls_version": 0, 00:29:30.454 "enable_ktls": false 00:29:30.454 } 00:29:30.454 } 00:29:30.454 ] 00:29:30.454 }, 
00:29:30.454 { 00:29:30.454 "subsystem": "vmd", 00:29:30.454 "config": [] 00:29:30.454 }, 00:29:30.454 { 00:29:30.454 "subsystem": "accel", 00:29:30.454 "config": [ 00:29:30.454 { 00:29:30.454 "method": "accel_set_options", 00:29:30.454 "params": { 00:29:30.454 "small_cache_size": 128, 00:29:30.454 "large_cache_size": 16, 00:29:30.454 "task_count": 2048, 00:29:30.454 "sequence_count": 2048, 00:29:30.454 "buf_count": 2048 00:29:30.454 } 00:29:30.454 } 00:29:30.454 ] 00:29:30.454 }, 00:29:30.454 { 00:29:30.454 "subsystem": "bdev", 00:29:30.454 "config": [ 00:29:30.454 { 00:29:30.454 "method": "bdev_set_options", 00:29:30.454 "params": { 00:29:30.454 "bdev_io_pool_size": 65535, 00:29:30.454 "bdev_io_cache_size": 256, 00:29:30.454 "bdev_auto_examine": true, 00:29:30.454 "iobuf_small_cache_size": 128, 00:29:30.454 "iobuf_large_cache_size": 16 00:29:30.454 } 00:29:30.454 }, 00:29:30.454 { 00:29:30.454 "method": "bdev_raid_set_options", 00:29:30.454 "params": { 00:29:30.454 "process_window_size_kb": 1024, 00:29:30.454 "process_max_bandwidth_mb_sec": 0 00:29:30.454 } 00:29:30.454 }, 00:29:30.454 { 00:29:30.454 "method": "bdev_iscsi_set_options", 00:29:30.454 "params": { 00:29:30.454 "timeout_sec": 30 00:29:30.454 } 00:29:30.454 }, 00:29:30.454 { 00:29:30.454 "method": "bdev_nvme_set_options", 00:29:30.454 "params": { 00:29:30.454 "action_on_timeout": "none", 00:29:30.454 "timeout_us": 0, 00:29:30.454 "timeout_admin_us": 0, 00:29:30.454 "keep_alive_timeout_ms": 10000, 00:29:30.454 "arbitration_burst": 0, 00:29:30.454 "low_priority_weight": 0, 00:29:30.454 "medium_priority_weight": 0, 00:29:30.454 "high_priority_weight": 0, 00:29:30.454 "nvme_adminq_poll_period_us": 10000, 00:29:30.454 "nvme_ioq_poll_period_us": 0, 00:29:30.454 09:07:08 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:29:30.454 "io_queue_requests": 512, 00:29:30.454 "delay_cmd_submit": true, 00:29:30.454 "transport_retry_count": 4, 00:29:30.454 "bdev_retry_count": 3, 00:29:30.454 "transport_ack_timeout": 0, 00:29:30.454 "ctrlr_loss_timeout_sec": 0, 00:29:30.454 "reconnect_delay_sec": 0, 00:29:30.454 "fast_io_fail_timeout_sec": 0, 00:29:30.454 "disable_auto_failback": false, 00:29:30.454 "generate_uuids": false, 00:29:30.454 "transport_tos": 0, 00:29:30.454 "nvme_error_stat": false, 00:29:30.454 "rdma_srq_size": 0, 00:29:30.454 "io_path_stat": false, 00:29:30.454 "allow_accel_sequence": false, 00:29:30.454 "rdma_max_cq_size": 0, 00:29:30.454 "rdma_cm_event_timeout_ms": 0, 00:29:30.454 "dhchap_digests": [ 00:29:30.454 "sha256", 00:29:30.454 "sha384", 00:29:30.454 "sha512" 00:29:30.454 ], 00:29:30.454 "dhchap_dhgroups": [ 00:29:30.454 "null", 00:29:30.454 "ffdhe2048", 00:29:30.454 "ffdhe3072", 00:29:30.454 "ffdhe4096", 00:29:30.454 "ffdhe6144", 00:29:30.454 "ffdhe8192" 00:29:30.454 ] 00:29:30.454 } 00:29:30.454 }, 00:29:30.454 { 00:29:30.454 "method": "bdev_nvme_attach_controller", 00:29:30.454 "params": { 00:29:30.454 "name": "nvme0", 00:29:30.454 "trtype": "TCP", 00:29:30.454 "adrfam": "IPv4", 00:29:30.454 "traddr": "127.0.0.1", 00:29:30.454 "trsvcid": "4420", 00:29:30.454 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:30.454 "prchk_reftag": false, 00:29:30.454 "prchk_guard": false, 00:29:30.454 "ctrlr_loss_timeout_sec": 0, 00:29:30.454 "reconnect_delay_sec": 0, 00:29:30.454 "fast_io_fail_timeout_sec": 0, 00:29:30.454 "psk": "key0", 00:29:30.454 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:30.454 "hdgst": false, 00:29:30.454 "ddgst": false 00:29:30.454 } 00:29:30.454 }, 00:29:30.454 { 00:29:30.454 "method": "bdev_nvme_set_hotplug", 00:29:30.454 "params": { 00:29:30.454 "period_us": 100000, 00:29:30.454 "enable": false 00:29:30.454 } 00:29:30.454 }, 00:29:30.454 { 00:29:30.454 "method": "bdev_wait_for_examine" 00:29:30.454 } 00:29:30.454 ] 00:29:30.454 }, 00:29:30.454 { 00:29:30.454 "subsystem": "nbd", 00:29:30.454 "config": [] 00:29:30.454 } 00:29:30.454 ] 00:29:30.454 }' 00:29:30.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:30.454 09:07:08 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:30.454 09:07:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:30.454 [2024-09-28 09:07:08.326798] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:29:30.454 [2024-09-28 09:07:08.327009] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91981 ] 00:29:30.722 [2024-09-28 09:07:08.494117] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.722 [2024-09-28 09:07:08.649102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.980 [2024-09-28 09:07:08.882122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:31.239 [2024-09-28 09:07:08.989986] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:31.497 09:07:09 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:31.497 09:07:09 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:29:31.497 09:07:09 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:29:31.497 09:07:09 keyring_file -- keyring/file.sh@121 -- # jq length 00:29:31.497 09:07:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:31.756 09:07:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:31.756 09:07:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:29:31.756 09:07:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:31.756 09:07:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:31.756 09:07:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:31.756 09:07:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:31.756 09:07:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:32.015 09:07:09 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:29:32.015 09:07:09 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:29:32.015 09:07:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:32.015 09:07:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:32.015 09:07:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:32.015 09:07:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:32.015 09:07:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:32.015 09:07:09 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:29:32.015 09:07:09 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:29:32.015 09:07:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:32.015 09:07:09 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:29:32.583 09:07:10 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:29:32.583 09:07:10 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:32.583 09:07:10 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.RKReepZHjX /tmp/tmp.BBwGQvg8uk 00:29:32.583 09:07:10 keyring_file -- keyring/file.sh@20 -- # killprocess 91981 00:29:32.583 09:07:10 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 91981 ']' 00:29:32.583 09:07:10 keyring_file -- common/autotest_common.sh@954 -- # kill -0 91981 00:29:32.583 09:07:10 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:29:32.583 09:07:10 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:32.583 09:07:10 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91981 00:29:32.583 09:07:10 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:32.584 09:07:10 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:32.584 killing process with pid 91981 00:29:32.584 09:07:10 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91981' 00:29:32.584 09:07:10 keyring_file -- common/autotest_common.sh@969 -- # kill 91981 00:29:32.584 Received shutdown signal, test time was about 1.000000 seconds 00:29:32.584 00:29:32.584 Latency(us) 00:29:32.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.584 =================================================================================================================== 00:29:32.584 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:32.584 09:07:10 keyring_file -- common/autotest_common.sh@974 -- # wait 91981 00:29:33.519 09:07:11 keyring_file -- keyring/file.sh@21 -- # killprocess 91711 00:29:33.519 09:07:11 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 91711 ']' 00:29:33.519 09:07:11 keyring_file -- common/autotest_common.sh@954 -- # kill -0 91711 00:29:33.519 09:07:11 keyring_file -- common/autotest_common.sh@955 -- # uname 00:29:33.519 09:07:11 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:33.519 09:07:11 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91711 00:29:33.519 09:07:11 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:33.519 09:07:11 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:33.519 killing process with pid 91711 00:29:33.520 09:07:11 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91711' 00:29:33.520 09:07:11 keyring_file -- common/autotest_common.sh@969 -- # kill 91711 00:29:33.520 09:07:11 keyring_file -- common/autotest_common.sh@974 -- # wait 91711 00:29:35.426 00:29:35.426 real 0m18.768s 00:29:35.426 user 0m43.680s 00:29:35.426 sys 0m2.999s 00:29:35.426 09:07:13 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:35.426 09:07:13 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:35.426 ************************************ 00:29:35.426 END TEST keyring_file 00:29:35.426 ************************************ 00:29:35.426 09:07:13 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:29:35.426 09:07:13 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:35.426 09:07:13 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:35.426 09:07:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:35.426 09:07:13 -- common/autotest_common.sh@10 -- # set +x 00:29:35.426 ************************************ 00:29:35.426 START TEST keyring_linux 00:29:35.426 ************************************ 00:29:35.426 09:07:13 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:35.426 Joined session keyring: 108490576 00:29:35.426 * Looking for test storage... 
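keyring_file is finished (timing summary and END banner above) and keyring_linux starts under scripts/keyctl-session-wrapper, which is what the "Joined session keyring: 108490576" line reports: the wrapper joins a fresh kernel session keyring before running linux.sh, so keys the test adds stay isolated from the invoking user's keyring and disappear with the session. In essence the wrapper amounts to something like this (a simplification, not its verbatim contents); the test-storage probe then continues below:

  # join an anonymous session keyring, then run the test inside it
  keyctl session - /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh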
00:29:35.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:35.426 09:07:13 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:35.427 09:07:13 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:29:35.427 09:07:13 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:35.427 09:07:13 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@345 -- # : 1 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@368 -- # return 0 00:29:35.427 09:07:13 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:35.427 09:07:13 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:35.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.427 --rc genhtml_branch_coverage=1 00:29:35.427 --rc genhtml_function_coverage=1 00:29:35.427 --rc genhtml_legend=1 00:29:35.427 --rc geninfo_all_blocks=1 00:29:35.427 --rc geninfo_unexecuted_blocks=1 00:29:35.427 00:29:35.427 ' 00:29:35.427 09:07:13 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:35.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.427 --rc genhtml_branch_coverage=1 00:29:35.427 --rc genhtml_function_coverage=1 00:29:35.427 --rc genhtml_legend=1 00:29:35.427 --rc 
geninfo_all_blocks=1 00:29:35.427 --rc geninfo_unexecuted_blocks=1 00:29:35.427 00:29:35.427 ' 00:29:35.427 09:07:13 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:35.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.427 --rc genhtml_branch_coverage=1 00:29:35.427 --rc genhtml_function_coverage=1 00:29:35.427 --rc genhtml_legend=1 00:29:35.427 --rc geninfo_all_blocks=1 00:29:35.427 --rc geninfo_unexecuted_blocks=1 00:29:35.427 00:29:35.427 ' 00:29:35.427 09:07:13 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:35.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:35.427 --rc genhtml_branch_coverage=1 00:29:35.427 --rc genhtml_function_coverage=1 00:29:35.427 --rc genhtml_legend=1 00:29:35.427 --rc geninfo_all_blocks=1 00:29:35.427 --rc geninfo_unexecuted_blocks=1 00:29:35.427 00:29:35.427 ' 00:29:35.427 09:07:13 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:b09210cb-7022-43fe-9129-03e098f7a403 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=b09210cb-7022-43fe-9129-03e098f7a403 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:35.427 09:07:13 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:35.427 09:07:13 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.427 09:07:13 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.427 09:07:13 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.427 09:07:13 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:35.427 09:07:13 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:35.427 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:35.427 09:07:13 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:35.427 09:07:13 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:35.427 09:07:13 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:35.427 09:07:13 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:35.427 09:07:13 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:35.427 09:07:13 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@15 -- # local 
name key digest path 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:29:35.427 09:07:13 keyring_linux -- nvmf/common.sh@729 -- # python - 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:35.427 /tmp/:spdk-test:key0 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:35.427 09:07:13 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:35.427 09:07:13 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:35.428 09:07:13 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:35.428 09:07:13 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:35.428 09:07:13 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:29:35.428 09:07:13 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:29:35.428 09:07:13 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:29:35.428 09:07:13 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:29:35.428 09:07:13 keyring_linux -- nvmf/common.sh@729 -- # python - 00:29:35.428 09:07:13 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:35.428 /tmp/:spdk-test:key1 00:29:35.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.428 09:07:13 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:35.428 09:07:13 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=92127 00:29:35.428 09:07:13 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:35.428 09:07:13 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 92127 00:29:35.428 09:07:13 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 92127 ']' 00:29:35.428 09:07:13 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.428 09:07:13 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:35.428 09:07:13 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:35.428 09:07:13 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:35.428 09:07:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:35.687 [2024-09-28 09:07:13.520721] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:29:35.687 [2024-09-28 09:07:13.521183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92127 ] 00:29:35.946 [2024-09-28 09:07:13.690290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.946 [2024-09-28 09:07:13.836935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.206 [2024-09-28 09:07:14.017616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:36.774 09:07:14 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:36.774 09:07:14 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:29:36.774 09:07:14 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:36.774 09:07:14 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.774 09:07:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:36.774 [2024-09-28 09:07:14.475039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.774 null0 00:29:36.774 [2024-09-28 09:07:14.507011] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:36.774 [2024-09-28 09:07:14.507294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:36.774 09:07:14 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.774 09:07:14 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:36.774 235312427 00:29:36.774 09:07:14 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:36.774 304822771 00:29:36.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:36.774 09:07:14 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=92145 00:29:36.774 09:07:14 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:36.774 09:07:14 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 92145 /var/tmp/bperf.sock 00:29:36.774 09:07:14 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 92145 ']' 00:29:36.774 09:07:14 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:36.774 09:07:14 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:36.774 09:07:14 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:36.774 09:07:14 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:36.774 09:07:14 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:36.774 [2024-09-28 09:07:14.645978] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:29:36.774 [2024-09-28 09:07:14.646424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92145 ] 00:29:37.033 [2024-09-28 09:07:14.814110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.033 [2024-09-28 09:07:14.964845] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.968 09:07:15 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:37.968 09:07:15 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:29:37.968 09:07:15 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:37.968 09:07:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:37.968 09:07:15 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:37.968 09:07:15 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:38.227 [2024-09-28 09:07:16.151266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:38.486 09:07:16 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:38.486 09:07:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:38.745 [2024-09-28 09:07:16.500586] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:38.745 nvme0n1 00:29:38.745 09:07:16 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:38.746 09:07:16 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:38.746 09:07:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:38.746 09:07:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:38.746 09:07:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:38.746 09:07:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:39.005 09:07:16 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:39.005 09:07:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:39.005 09:07:16 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:39.005 09:07:16 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:39.005 09:07:16 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:39.005 09:07:16 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:39.005 09:07:16 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:39.264 09:07:17 keyring_linux -- keyring/linux.sh@25 -- # sn=235312427 00:29:39.264 09:07:17 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:39.264 09:07:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:29:39.264 09:07:17 keyring_linux -- keyring/linux.sh@26 -- # [[ 235312427 == \2\3\5\3\1\2\4\2\7 ]] 00:29:39.264 09:07:17 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 235312427 00:29:39.264 09:07:17 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:39.264 09:07:17 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:39.264 Running I/O for 1 seconds... 00:29:40.645 10615.00 IOPS, 41.46 MiB/s 00:29:40.645 Latency(us) 00:29:40.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.645 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:40.645 nvme0n1 : 1.01 10624.81 41.50 0.00 0.00 11971.21 3813.00 15847.80 00:29:40.645 =================================================================================================================== 00:29:40.645 Total : 10624.81 41.50 0.00 0.00 11971.21 3813.00 15847.80 00:29:40.645 { 00:29:40.645 "results": [ 00:29:40.645 { 00:29:40.645 "job": "nvme0n1", 00:29:40.645 "core_mask": "0x2", 00:29:40.645 "workload": "randread", 00:29:40.645 "status": "finished", 00:29:40.645 "queue_depth": 128, 00:29:40.645 "io_size": 4096, 00:29:40.645 "runtime": 1.011218, 00:29:40.645 "iops": 10624.810871641921, 00:29:40.645 "mibps": 41.503167467351254, 00:29:40.645 "io_failed": 0, 00:29:40.645 "io_timeout": 0, 00:29:40.645 "avg_latency_us": 11971.208101265824, 00:29:40.645 "min_latency_us": 3813.0036363636364, 00:29:40.645 "max_latency_us": 15847.796363636364 00:29:40.645 } 00:29:40.645 ], 00:29:40.645 "core_count": 1 00:29:40.645 } 00:29:40.645 09:07:18 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:40.645 09:07:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:40.645 09:07:18 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:40.645 09:07:18 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:40.645 09:07:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:40.645 09:07:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:40.645 09:07:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:40.645 09:07:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:40.904 09:07:18 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:40.904 09:07:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:40.904 09:07:18 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:40.904 09:07:18 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:40.904 09:07:18 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:29:40.904 09:07:18 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:40.904 09:07:18 keyring_linux -- common/autotest_common.sh@638 -- # local 
arg=bperf_cmd 00:29:40.904 09:07:18 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:40.904 09:07:18 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:29:40.904 09:07:18 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:40.904 09:07:18 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:40.905 09:07:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:41.164 [2024-09-28 09:07:19.014953] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:41.164 [2024-09-28 09:07:19.015244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:29:41.164 [2024-09-28 09:07:19.016245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:29:41.164 [2024-09-28 09:07:19.017221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:41.164 [2024-09-28 09:07:19.017433] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:41.164 [2024-09-28 09:07:19.017473] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:41.164 [2024-09-28 09:07:19.017490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:29:41.164 request: 00:29:41.164 { 00:29:41.164 "name": "nvme0", 00:29:41.164 "trtype": "tcp", 00:29:41.164 "traddr": "127.0.0.1", 00:29:41.164 "adrfam": "ipv4", 00:29:41.164 "trsvcid": "4420", 00:29:41.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:41.164 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:41.164 "prchk_reftag": false, 00:29:41.164 "prchk_guard": false, 00:29:41.164 "hdgst": false, 00:29:41.164 "ddgst": false, 00:29:41.164 "psk": ":spdk-test:key1", 00:29:41.164 "allow_unrecognized_csi": false, 00:29:41.164 "method": "bdev_nvme_attach_controller", 00:29:41.164 "req_id": 1 00:29:41.164 } 00:29:41.164 Got JSON-RPC error response 00:29:41.164 response: 00:29:41.164 { 00:29:41.164 "code": -5, 00:29:41.164 "message": "Input/output error" 00:29:41.164 } 00:29:41.164 09:07:19 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:29:41.164 09:07:19 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:41.164 09:07:19 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:41.164 09:07:19 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@33 -- # sn=235312427 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 235312427 00:29:41.164 1 links removed 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@33 -- # sn=304822771 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 304822771 00:29:41.164 1 links removed 00:29:41.164 09:07:19 keyring_linux -- keyring/linux.sh@41 -- # killprocess 92145 00:29:41.164 09:07:19 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 92145 ']' 00:29:41.164 09:07:19 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 92145 00:29:41.164 09:07:19 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:29:41.164 09:07:19 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:41.164 09:07:19 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92145 00:29:41.164 killing process with pid 92145 00:29:41.164 Received shutdown signal, test time was about 1.000000 seconds 00:29:41.164 00:29:41.164 Latency(us) 00:29:41.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.165 =================================================================================================================== 00:29:41.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:41.165 09:07:19 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:41.165 09:07:19 
keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:41.165 09:07:19 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92145' 00:29:41.165 09:07:19 keyring_linux -- common/autotest_common.sh@969 -- # kill 92145 00:29:41.165 09:07:19 keyring_linux -- common/autotest_common.sh@974 -- # wait 92145 00:29:42.103 09:07:19 keyring_linux -- keyring/linux.sh@42 -- # killprocess 92127 00:29:42.103 09:07:19 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 92127 ']' 00:29:42.103 09:07:19 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 92127 00:29:42.103 09:07:19 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:29:42.103 09:07:19 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:42.103 09:07:19 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92127 00:29:42.103 killing process with pid 92127 00:29:42.103 09:07:19 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:42.103 09:07:19 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:42.103 09:07:19 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92127' 00:29:42.103 09:07:19 keyring_linux -- common/autotest_common.sh@969 -- # kill 92127 00:29:42.103 09:07:19 keyring_linux -- common/autotest_common.sh@974 -- # wait 92127 00:29:44.009 ************************************ 00:29:44.009 END TEST keyring_linux 00:29:44.009 ************************************ 00:29:44.009 00:29:44.009 real 0m8.624s 00:29:44.009 user 0m15.460s 00:29:44.009 sys 0m1.539s 00:29:44.009 09:07:21 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:44.010 09:07:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:44.010 09:07:21 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:29:44.010 09:07:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:44.010 09:07:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:44.010 09:07:21 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:29:44.010 09:07:21 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:29:44.010 09:07:21 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:29:44.010 09:07:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:44.010 09:07:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:44.010 09:07:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:44.010 09:07:21 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:29:44.010 09:07:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:44.010 09:07:21 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:29:44.010 09:07:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:44.010 09:07:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:44.010 09:07:21 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:29:44.010 09:07:21 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:29:44.010 09:07:21 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:29:44.010 09:07:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:44.010 09:07:21 -- common/autotest_common.sh@10 -- # set +x 00:29:44.010 09:07:21 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:29:44.010 09:07:21 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:44.010 09:07:21 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:44.010 09:07:21 -- common/autotest_common.sh@10 -- # set +x 00:29:45.916 INFO: APP EXITING 00:29:45.916 INFO: killing all VMs 00:29:45.916 INFO: killing vhost app 00:29:45.916 INFO: EXIT DONE 00:29:46.484 
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:46.484 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:46.484 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:47.053 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:47.312 Cleaning 00:29:47.312 Removing: /var/run/dpdk/spdk0/config 00:29:47.312 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:47.312 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:47.312 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:47.312 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:47.312 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:47.312 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:47.312 Removing: /var/run/dpdk/spdk1/config 00:29:47.312 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:47.312 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:47.312 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:47.312 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:47.312 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:47.312 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:47.312 Removing: /var/run/dpdk/spdk2/config 00:29:47.312 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:47.312 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:47.312 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:47.312 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:47.312 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:47.312 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:47.312 Removing: /var/run/dpdk/spdk3/config 00:29:47.312 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:47.312 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:47.312 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:47.312 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:47.312 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:47.312 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:47.312 Removing: /var/run/dpdk/spdk4/config 00:29:47.312 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:47.312 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:47.312 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:47.312 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:47.312 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:47.312 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:47.312 Removing: /dev/shm/nvmf_trace.0 00:29:47.312 Removing: /dev/shm/spdk_tgt_trace.pid57399 00:29:47.312 Removing: /var/run/dpdk/spdk0 00:29:47.312 Removing: /var/run/dpdk/spdk1 00:29:47.312 Removing: /var/run/dpdk/spdk2 00:29:47.312 Removing: /var/run/dpdk/spdk3 00:29:47.312 Removing: /var/run/dpdk/spdk4 00:29:47.312 Removing: /var/run/dpdk/spdk_pid57186 00:29:47.312 Removing: /var/run/dpdk/spdk_pid57399 00:29:47.312 Removing: /var/run/dpdk/spdk_pid57623 00:29:47.312 Removing: /var/run/dpdk/spdk_pid57721 00:29:47.312 Removing: /var/run/dpdk/spdk_pid57771 00:29:47.312 Removing: /var/run/dpdk/spdk_pid57900 00:29:47.312 Removing: /var/run/dpdk/spdk_pid57918 00:29:47.312 Removing: /var/run/dpdk/spdk_pid58077 00:29:47.312 Removing: /var/run/dpdk/spdk_pid58291 00:29:47.312 Removing: /var/run/dpdk/spdk_pid58457 00:29:47.312 Removing: /var/run/dpdk/spdk_pid58562 00:29:47.312 Removing: /var/run/dpdk/spdk_pid58658 00:29:47.312 Removing: 
/var/run/dpdk/spdk_pid58780 00:29:47.312 Removing: /var/run/dpdk/spdk_pid58877 00:29:47.312 Removing: /var/run/dpdk/spdk_pid58922 00:29:47.312 Removing: /var/run/dpdk/spdk_pid58964 00:29:47.312 Removing: /var/run/dpdk/spdk_pid59035 00:29:47.312 Removing: /var/run/dpdk/spdk_pid59152 00:29:47.312 Removing: /var/run/dpdk/spdk_pid59610 00:29:47.312 Removing: /var/run/dpdk/spdk_pid59685 00:29:47.312 Removing: /var/run/dpdk/spdk_pid59748 00:29:47.312 Removing: /var/run/dpdk/spdk_pid59765 00:29:47.312 Removing: /var/run/dpdk/spdk_pid59914 00:29:47.312 Removing: /var/run/dpdk/spdk_pid59931 00:29:47.312 Removing: /var/run/dpdk/spdk_pid60088 00:29:47.312 Removing: /var/run/dpdk/spdk_pid60110 00:29:47.312 Removing: /var/run/dpdk/spdk_pid60174 00:29:47.572 Removing: /var/run/dpdk/spdk_pid60192 00:29:47.572 Removing: /var/run/dpdk/spdk_pid60256 00:29:47.572 Removing: /var/run/dpdk/spdk_pid60274 00:29:47.572 Removing: /var/run/dpdk/spdk_pid60456 00:29:47.572 Removing: /var/run/dpdk/spdk_pid60493 00:29:47.572 Removing: /var/run/dpdk/spdk_pid60582 00:29:47.572 Removing: /var/run/dpdk/spdk_pid60939 00:29:47.572 Removing: /var/run/dpdk/spdk_pid60952 00:29:47.572 Removing: /var/run/dpdk/spdk_pid60995 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61026 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61054 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61085 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61110 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61143 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61174 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61200 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61227 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61264 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61289 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61317 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61348 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61379 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61401 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61432 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61463 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61492 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61533 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61564 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61606 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61690 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61730 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61756 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61798 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61819 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61839 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61899 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61924 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61965 00:29:47.572 Removing: /var/run/dpdk/spdk_pid61992 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62010 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62035 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62062 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62078 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62105 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62127 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62167 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62211 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62233 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62273 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62295 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62314 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62367 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62396 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62429 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62454 
00:29:47.572 Removing: /var/run/dpdk/spdk_pid62468 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62493 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62513 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62532 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62552 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62571 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62665 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62747 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62910 00:29:47.572 Removing: /var/run/dpdk/spdk_pid62956 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63013 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63044 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63073 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63101 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63150 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63172 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63262 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63312 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63388 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63514 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63599 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63651 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63773 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63828 00:29:47.572 Removing: /var/run/dpdk/spdk_pid63878 00:29:47.572 Removing: /var/run/dpdk/spdk_pid64139 00:29:47.832 Removing: /var/run/dpdk/spdk_pid64257 00:29:47.832 Removing: /var/run/dpdk/spdk_pid64298 00:29:47.832 Removing: /var/run/dpdk/spdk_pid64334 00:29:47.832 Removing: /var/run/dpdk/spdk_pid64379 00:29:47.832 Removing: /var/run/dpdk/spdk_pid64431 00:29:47.832 Removing: /var/run/dpdk/spdk_pid64476 00:29:47.832 Removing: /var/run/dpdk/spdk_pid64525 00:29:47.832 Removing: /var/run/dpdk/spdk_pid64943 00:29:47.832 Removing: /var/run/dpdk/spdk_pid64982 00:29:47.832 Removing: /var/run/dpdk/spdk_pid65361 00:29:47.832 Removing: /var/run/dpdk/spdk_pid65845 00:29:47.832 Removing: /var/run/dpdk/spdk_pid66116 00:29:47.832 Removing: /var/run/dpdk/spdk_pid67059 00:29:47.832 Removing: /var/run/dpdk/spdk_pid68024 00:29:47.832 Removing: /var/run/dpdk/spdk_pid68153 00:29:47.832 Removing: /var/run/dpdk/spdk_pid68233 00:29:47.832 Removing: /var/run/dpdk/spdk_pid69698 00:29:47.832 Removing: /var/run/dpdk/spdk_pid70068 00:29:47.832 Removing: /var/run/dpdk/spdk_pid73831 00:29:47.832 Removing: /var/run/dpdk/spdk_pid74241 00:29:47.832 Removing: /var/run/dpdk/spdk_pid74348 00:29:47.832 Removing: /var/run/dpdk/spdk_pid74500 00:29:47.832 Removing: /var/run/dpdk/spdk_pid74542 00:29:47.832 Removing: /var/run/dpdk/spdk_pid74583 00:29:47.832 Removing: /var/run/dpdk/spdk_pid74618 00:29:47.832 Removing: /var/run/dpdk/spdk_pid74742 00:29:47.832 Removing: /var/run/dpdk/spdk_pid74891 00:29:47.832 Removing: /var/run/dpdk/spdk_pid75081 00:29:47.832 Removing: /var/run/dpdk/spdk_pid75182 00:29:47.832 Removing: /var/run/dpdk/spdk_pid75395 00:29:47.832 Removing: /var/run/dpdk/spdk_pid75499 00:29:47.832 Removing: /var/run/dpdk/spdk_pid75607 00:29:47.832 Removing: /var/run/dpdk/spdk_pid75984 00:29:47.832 Removing: /var/run/dpdk/spdk_pid76427 00:29:47.832 Removing: /var/run/dpdk/spdk_pid76428 00:29:47.832 Removing: /var/run/dpdk/spdk_pid76429 00:29:47.832 Removing: /var/run/dpdk/spdk_pid76714 00:29:47.832 Removing: /var/run/dpdk/spdk_pid77001 00:29:47.832 Removing: /var/run/dpdk/spdk_pid77004 00:29:47.832 Removing: /var/run/dpdk/spdk_pid79432 00:29:47.832 Removing: /var/run/dpdk/spdk_pid79435 00:29:47.832 Removing: /var/run/dpdk/spdk_pid79775 00:29:47.832 Removing: /var/run/dpdk/spdk_pid79796 00:29:47.832 Removing: 
/var/run/dpdk/spdk_pid79811 00:29:47.832 Removing: /var/run/dpdk/spdk_pid79849 00:29:47.832 Removing: /var/run/dpdk/spdk_pid79855 00:29:47.832 Removing: /var/run/dpdk/spdk_pid79944 00:29:47.832 Removing: /var/run/dpdk/spdk_pid79948 00:29:47.832 Removing: /var/run/dpdk/spdk_pid80052 00:29:47.832 Removing: /var/run/dpdk/spdk_pid80066 00:29:47.832 Removing: /var/run/dpdk/spdk_pid80170 00:29:47.832 Removing: /var/run/dpdk/spdk_pid80174 00:29:47.832 Removing: /var/run/dpdk/spdk_pid80633 00:29:47.832 Removing: /var/run/dpdk/spdk_pid80675 00:29:47.832 Removing: /var/run/dpdk/spdk_pid80782 00:29:47.832 Removing: /var/run/dpdk/spdk_pid80849 00:29:47.832 Removing: /var/run/dpdk/spdk_pid81230 00:29:47.832 Removing: /var/run/dpdk/spdk_pid81434 00:29:47.832 Removing: /var/run/dpdk/spdk_pid81885 00:29:47.832 Removing: /var/run/dpdk/spdk_pid82462 00:29:47.832 Removing: /var/run/dpdk/spdk_pid83337 00:29:47.832 Removing: /var/run/dpdk/spdk_pid83989 00:29:47.832 Removing: /var/run/dpdk/spdk_pid83998 00:29:47.832 Removing: /var/run/dpdk/spdk_pid86051 00:29:47.832 Removing: /var/run/dpdk/spdk_pid86118 00:29:47.832 Removing: /var/run/dpdk/spdk_pid86186 00:29:47.832 Removing: /var/run/dpdk/spdk_pid86253 00:29:47.832 Removing: /var/run/dpdk/spdk_pid86387 00:29:47.832 Removing: /var/run/dpdk/spdk_pid86454 00:29:47.832 Removing: /var/run/dpdk/spdk_pid86523 00:29:47.832 Removing: /var/run/dpdk/spdk_pid86590 00:29:47.832 Removing: /var/run/dpdk/spdk_pid86978 00:29:47.832 Removing: /var/run/dpdk/spdk_pid88200 00:29:47.832 Removing: /var/run/dpdk/spdk_pid88356 00:29:47.832 Removing: /var/run/dpdk/spdk_pid88600 00:29:47.832 Removing: /var/run/dpdk/spdk_pid89216 00:29:47.832 Removing: /var/run/dpdk/spdk_pid89387 00:29:47.832 Removing: /var/run/dpdk/spdk_pid89544 00:29:47.832 Removing: /var/run/dpdk/spdk_pid89644 00:29:48.090 Removing: /var/run/dpdk/spdk_pid89801 00:29:48.091 Removing: /var/run/dpdk/spdk_pid89910 00:29:48.091 Removing: /var/run/dpdk/spdk_pid90634 00:29:48.091 Removing: /var/run/dpdk/spdk_pid90670 00:29:48.091 Removing: /var/run/dpdk/spdk_pid90707 00:29:48.091 Removing: /var/run/dpdk/spdk_pid91173 00:29:48.091 Removing: /var/run/dpdk/spdk_pid91204 00:29:48.091 Removing: /var/run/dpdk/spdk_pid91246 00:29:48.091 Removing: /var/run/dpdk/spdk_pid91711 00:29:48.091 Removing: /var/run/dpdk/spdk_pid91728 00:29:48.091 Removing: /var/run/dpdk/spdk_pid91981 00:29:48.091 Removing: /var/run/dpdk/spdk_pid92127 00:29:48.091 Removing: /var/run/dpdk/spdk_pid92145 00:29:48.091 Clean 00:29:48.091 09:07:25 -- common/autotest_common.sh@1451 -- # return 0 00:29:48.091 09:07:25 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:29:48.091 09:07:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:48.091 09:07:25 -- common/autotest_common.sh@10 -- # set +x 00:29:48.091 09:07:25 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:29:48.091 09:07:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:48.091 09:07:25 -- common/autotest_common.sh@10 -- # set +x 00:29:48.091 09:07:26 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:48.091 09:07:26 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:48.091 09:07:26 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:48.091 09:07:26 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:29:48.091 09:07:26 -- spdk/autotest.sh@394 -- # hostname 00:29:48.091 09:07:26 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:48.349 geninfo: WARNING: invalid characters removed from testname! 00:30:14.953 09:07:49 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:14.953 09:07:52 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:17.489 09:07:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:20.017 09:07:57 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:22.548 09:08:00 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:25.082 09:08:02 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:27.615 09:08:05 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:27.615 09:08:05 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:30:27.615 09:08:05 -- common/autotest_common.sh@1681 -- $ lcov --version 00:30:27.615 09:08:05 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:30:27.615 09:08:05 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:30:27.615 09:08:05 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:30:27.615 09:08:05 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:30:27.615 09:08:05 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:30:27.615 09:08:05 -- scripts/common.sh@336 -- $ IFS=.-: 00:30:27.615 09:08:05 -- scripts/common.sh@336 -- $ read -ra ver1 
00:30:27.615 09:08:05 -- scripts/common.sh@337 -- $ IFS=.-: 00:30:27.615 09:08:05 -- scripts/common.sh@337 -- $ read -ra ver2 00:30:27.615 09:08:05 -- scripts/common.sh@338 -- $ local 'op=<' 00:30:27.615 09:08:05 -- scripts/common.sh@340 -- $ ver1_l=2 00:30:27.615 09:08:05 -- scripts/common.sh@341 -- $ ver2_l=1 00:30:27.615 09:08:05 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:30:27.615 09:08:05 -- scripts/common.sh@344 -- $ case "$op" in 00:30:27.615 09:08:05 -- scripts/common.sh@345 -- $ : 1 00:30:27.615 09:08:05 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:30:27.615 09:08:05 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:27.615 09:08:05 -- scripts/common.sh@365 -- $ decimal 1 00:30:27.615 09:08:05 -- scripts/common.sh@353 -- $ local d=1 00:30:27.615 09:08:05 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:30:27.615 09:08:05 -- scripts/common.sh@355 -- $ echo 1 00:30:27.615 09:08:05 -- scripts/common.sh@365 -- $ ver1[v]=1 00:30:27.615 09:08:05 -- scripts/common.sh@366 -- $ decimal 2 00:30:27.615 09:08:05 -- scripts/common.sh@353 -- $ local d=2 00:30:27.615 09:08:05 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:30:27.615 09:08:05 -- scripts/common.sh@355 -- $ echo 2 00:30:27.615 09:08:05 -- scripts/common.sh@366 -- $ ver2[v]=2 00:30:27.615 09:08:05 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:30:27.615 09:08:05 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:30:27.615 09:08:05 -- scripts/common.sh@368 -- $ return 0 00:30:27.615 09:08:05 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.615 09:08:05 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:30:27.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.615 --rc genhtml_branch_coverage=1 00:30:27.615 --rc genhtml_function_coverage=1 00:30:27.615 --rc genhtml_legend=1 00:30:27.615 --rc geninfo_all_blocks=1 00:30:27.615 --rc geninfo_unexecuted_blocks=1 00:30:27.615 00:30:27.615 ' 00:30:27.615 09:08:05 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:30:27.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.615 --rc genhtml_branch_coverage=1 00:30:27.615 --rc genhtml_function_coverage=1 00:30:27.615 --rc genhtml_legend=1 00:30:27.615 --rc geninfo_all_blocks=1 00:30:27.615 --rc geninfo_unexecuted_blocks=1 00:30:27.615 00:30:27.615 ' 00:30:27.615 09:08:05 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:30:27.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.615 --rc genhtml_branch_coverage=1 00:30:27.615 --rc genhtml_function_coverage=1 00:30:27.615 --rc genhtml_legend=1 00:30:27.615 --rc geninfo_all_blocks=1 00:30:27.615 --rc geninfo_unexecuted_blocks=1 00:30:27.615 00:30:27.615 ' 00:30:27.615 09:08:05 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:30:27.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.615 --rc genhtml_branch_coverage=1 00:30:27.615 --rc genhtml_function_coverage=1 00:30:27.615 --rc genhtml_legend=1 00:30:27.615 --rc geninfo_all_blocks=1 00:30:27.615 --rc geninfo_unexecuted_blocks=1 00:30:27.615 00:30:27.615 ' 00:30:27.615 09:08:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:27.615 09:08:05 -- scripts/common.sh@15 -- $ shopt -s extglob 00:30:27.615 09:08:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:27.615 09:08:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:30:27.615 09:08:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.615 09:08:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.615 09:08:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.615 09:08:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.615 09:08:05 -- paths/export.sh@5 -- $ export PATH 00:30:27.615 09:08:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.615 09:08:05 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:30:27.615 09:08:05 -- common/autobuild_common.sh@479 -- $ date +%s 00:30:27.615 09:08:05 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727514485.XXXXXX 00:30:27.615 09:08:05 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727514485.zhLPjS 00:30:27.615 09:08:05 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:30:27.615 09:08:05 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:30:27.615 09:08:05 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:30:27.615 09:08:05 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:30:27.615 09:08:05 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:30:27.615 09:08:05 -- common/autobuild_common.sh@495 -- $ get_config_params 00:30:27.615 09:08:05 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:30:27.615 09:08:05 -- common/autotest_common.sh@10 -- $ set +x 00:30:27.615 09:08:05 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:30:27.615 09:08:05 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:30:27.615 09:08:05 -- pm/common@17 -- $ local monitor 
00:30:27.615 09:08:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:27.615 09:08:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:27.615 09:08:05 -- pm/common@25 -- $ sleep 1 00:30:27.615 09:08:05 -- pm/common@21 -- $ date +%s 00:30:27.615 09:08:05 -- pm/common@21 -- $ date +%s 00:30:27.615 09:08:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727514485 00:30:27.615 09:08:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727514485 00:30:27.615 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727514485_collect-vmstat.pm.log 00:30:27.615 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727514485_collect-cpu-load.pm.log 00:30:28.552 09:08:06 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:30:28.552 09:08:06 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:30:28.552 09:08:06 -- spdk/autopackage.sh@14 -- $ timing_finish 00:30:28.552 09:08:06 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:28.552 09:08:06 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:30:28.552 09:08:06 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:28.552 09:08:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:28.552 09:08:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:28.552 09:08:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:28.552 09:08:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:28.552 09:08:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:30:28.552 09:08:06 -- pm/common@44 -- $ pid=93933 00:30:28.552 09:08:06 -- pm/common@50 -- $ kill -TERM 93933 00:30:28.552 09:08:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:28.552 09:08:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:30:28.552 09:08:06 -- pm/common@44 -- $ pid=93934 00:30:28.552 09:08:06 -- pm/common@50 -- $ kill -TERM 93934 00:30:28.552 + [[ -n 5254 ]] 00:30:28.552 + sudo kill 5254 00:30:28.561 [Pipeline] } 00:30:28.578 [Pipeline] // timeout 00:30:28.584 [Pipeline] } 00:30:28.599 [Pipeline] // stage 00:30:28.605 [Pipeline] } 00:30:28.621 [Pipeline] // catchError 00:30:28.631 [Pipeline] stage 00:30:28.634 [Pipeline] { (Stop VM) 00:30:28.647 [Pipeline] sh 00:30:28.929 + vagrant halt 00:30:32.219 ==> default: Halting domain... 00:30:38.833 [Pipeline] sh 00:30:39.113 + vagrant destroy -f 00:30:41.647 ==> default: Removing domain... 
00:30:41.918 [Pipeline] sh 00:30:42.198 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:30:42.207 [Pipeline] } 00:30:42.222 [Pipeline] // stage 00:30:42.227 [Pipeline] } 00:30:42.240 [Pipeline] // dir 00:30:42.245 [Pipeline] } 00:30:42.258 [Pipeline] // wrap 00:30:42.264 [Pipeline] } 00:30:42.275 [Pipeline] // catchError 00:30:42.284 [Pipeline] stage 00:30:42.286 [Pipeline] { (Epilogue) 00:30:42.298 [Pipeline] sh 00:30:42.579 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:47.858 [Pipeline] catchError 00:30:47.860 [Pipeline] { 00:30:47.871 [Pipeline] sh 00:30:48.152 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:48.410 Artifacts sizes are good 00:30:48.419 [Pipeline] } 00:30:48.433 [Pipeline] // catchError 00:30:48.444 [Pipeline] archiveArtifacts 00:30:48.451 Archiving artifacts 00:30:48.625 [Pipeline] cleanWs 00:30:48.637 [WS-CLEANUP] Deleting project workspace... 00:30:48.637 [WS-CLEANUP] Deferred wipeout is used... 00:30:48.643 [WS-CLEANUP] done 00:30:48.645 [Pipeline] } 00:30:48.660 [Pipeline] // stage 00:30:48.666 [Pipeline] } 00:30:48.679 [Pipeline] // node 00:30:48.685 [Pipeline] End of Pipeline 00:30:48.735 Finished: SUCCESS